Learning Compressed Sentence Representations for On-Device Text Processing

Dinghan Shen¹*, Pengyu Cheng¹*, Dhanasekar Sundararaman¹, Xinyuan Zhang¹, Qian Yang¹, Meng Tang³, Asli Celikyilmaz², Lawrence Carin¹
¹Duke University  ²Microsoft Research  ³Stanford University
[email protected]
*Equal contribution.

Abstract

Vector representations of sentences, trained on massive text corpora, are widely used as generic sentence embeddings across a variety of NLP problems. The learned representations are generally assumed to be continuous and real-valued, giving rise to a large memory footprint and slow retrieval speed, which hinders their applicability to low-resource (memory and computation) platforms such as mobile devices. In this paper, we propose four different strategies to transform continuous and generic sentence embeddings into a binarized form, while preserving their rich semantic information. The introduced methods are evaluated across a wide range of downstream tasks, where the binarized sentence embeddings are demonstrated to degrade performance by only about 2% relative to their continuous counterparts, while reducing the storage requirement by over 98%. Moreover, with the learned binary representations, the semantic relatedness of two sentences can be evaluated by simply calculating their Hamming distance, which is more computationally efficient than the inner-product operation between continuous embeddings. Detailed analysis and case studies further validate the effectiveness of the proposed methods.

1 Introduction

Learning general-purpose sentence representations from large training corpora has received widespread attention in recent years. The learned sentence embeddings can encapsulate rich prior knowledge of natural language, which has been demonstrated to facilitate a variety of downstream tasks (without fine-tuning the encoder weights). The generic sentence embeddings can be trained either in an unsupervised manner (Kiros et al., 2015; Hill et al., 2016; Jernite et al., 2017; Gan et al., 2017; Logeswaran and Lee, 2018; Pagliardini et al., 2018), or with supervised tasks such as paraphrase identification (Wieting et al., 2016), natural language inference (Conneau et al., 2017), discourse relation classification (Nie et al., 2017), machine translation (Wieting and Gimpel, 2018), etc.

Significant effort has been devoted to designing better training objectives for learning sentence embeddings. However, prior methods typically assume that the general-purpose sentence representations are continuous and real-valued. This assumption is sub-optimal from the following perspectives: i) the sentence embeddings require a large storage or memory footprint; ii) it is computationally expensive to retrieve semantically-similar sentences, since every sentence representation in the database needs to be compared, and the inner-product operation is computationally involved. These two disadvantages hinder the applicability of generic sentence representations to mobile devices, where only a relatively tiny memory footprint and low computational capacity are typically available (Ravi and Kozareva, 2018).

In this paper, we aim to mitigate the above issues by binarizing the continuous sentence embeddings.
Consequently, the embeddings require a much smaller footprint, and similar sentences can be obtained by simply selecting those with the closest binary codes in the Hamming space (Kiros and Chan, 2018). One simple idea is to naively binarize the continuous vectors by setting a hard threshold. However, we find that this strategy leads to a significant performance drop in the empirical results. Besides, the dimension of the binary sentence embeddings cannot be flexibly chosen with this strategy, further limiting the practical use of the direct binarization method.

In this regard, we propose three alternative strategies to parametrize the transformation from pre-trained generic continuous embeddings to their binary forms. Our exploration spans from simple operations, such as a random projection, to deep neural network models, such as a regularized autoencoder. In particular, we introduce a semantic-preserving objective, which augments the standard autoencoder architecture to encourage the extraction of informative binary codes. InferSent (Conneau et al., 2017) is employed as the testbed sentence embedding in our experiments, but the binarization schemes proposed here can easily be extended to other pre-trained general-purpose sentence embeddings.

We evaluate the quality of the learned general-purpose binary representations using the SentEval toolkit (Conneau et al., 2017). It is observed that the inferred binary codes successfully maintain the semantic features contained in the continuous embeddings, and only lead to around a 2% performance drop on a set of downstream NLP tasks, while requiring merely 1.5% of the memory footprint of their continuous counterparts. Moreover, on several sentence-matching benchmarks, we demonstrate that the relatedness between a sentence pair can be evaluated by simply calculating the Hamming distance between their binary codes, which performs on par with or even better than measuring the cosine similarity between continuous embeddings (see Table 1). Note that computing the Hamming distance is much more computationally efficient than the inner-product operation in a continuous space. We further perform a K-nearest-neighbor sentence retrieval experiment on the SNLI dataset (Bowman et al., 2015), and show that semantically-similar sentences can indeed be efficiently retrieved with off-the-shelf binary sentence representations.

Summarizing, our contributions in this paper are as follows: i) to the best of our knowledge, we conduct the first systematic exploration of learning general-purpose binarized (memory-efficient) sentence representations, and four different strategies are proposed; ii) an autoencoder architecture with a carefully designed semantic-preserving loss exhibits strong empirical results on a set of downstream NLP tasks; iii) more importantly, we demonstrate, on several sentence-matching datasets, that simply evaluating the Hamming distance over binary representations performs on par with or even better than calculating the cosine similarity between their continuous counterparts (which is less computationally efficient).
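As a quick check on the storage figures quoted above, the following back-of-the-envelope sketch assumes a 4096-dimensional float32 continuous vector and a 2048-bit binary code, the dimensions used most often in the experiments below:

```python
# Storage comparison for a single sentence embedding, assuming a 4096-d
# float32 continuous vector versus a 2048-bit binary code.

continuous_bits = 4096 * 32      # float32 vector: 131,072 bits (16 KB)
binary_bits = 2048               # 2048-bit code: 256 bytes

ratio = binary_bits / continuous_bits
print(f"binary/continuous footprint: {ratio:.4f}")   # 0.0156 -> roughly 1.5%
print(f"storage saving: {1 - ratio:.1%}")            # roughly 98.4%
```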
2 Related Work

Sentence representations pre-trained from a large amount of data have been shown to be effective when transferred to a wide range of downstream tasks. Prior work along this line can be roughly divided into two categories: i) pre-trained models that require fine-tuning on the specific transfer task (Dai and Le, 2015; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2018; Cer et al., 2018); ii) methods that extract general-purpose sentence embeddings, which can be effectively applied to downstream NLP tasks without fine-tuning the encoder parameters (Kiros et al., 2015; Hill et al., 2016; Jernite et al., 2017; Gan et al., 2017; Adi et al., 2017; Logeswaran and Lee, 2018; Pagliardini et al., 2018; Tang and de Sa, 2018). Our proposed methods belong to the second category and provide a generic and easy-to-use encoder to extract highly informative sentence representations. However, our work is unique in that the embeddings inferred from our models are binarized and compact, and thus possess the advantages of a small memory footprint and much faster sentence retrieval.

Learning memory-efficient embeddings with deep neural networks has attracted substantial attention recently. One general strategy towards this goal is to extract discrete or binary data representations (Jang et al., 2016; Shu and Nakayama, 2017; Dai et al., 2017; Chen et al., 2018; Shen et al., 2018; Tissier et al., 2019). Binarized embeddings are especially attractive because they are more memory-efficient (relative to discrete embeddings), and they also enjoy the advantage of fast retrieval based upon a Hamming distance calculation. Previous work along this line in NLP has mainly focused on learning compact representations at the word level (Shu and Nakayama, 2017; Chen et al., 2018; Tissier et al., 2019), while much less effort has been devoted to extracting binarized embeddings at the sentence level. Our work aims to bridge this gap, and serves as an initial attempt to facilitate the deployment of state-of-the-art sentence embeddings in on-device mobile applications.

Our work is also related to prior research on semantic hashing, which aims to learn binary text embeddings specifically for the information retrieval task (Salakhutdinov and Hinton, 2009; Zhang et al., 2010; Wang et al., 2014; Xu et al., 2015; Shen et al., 2018). However, these methods are typically trained and evaluated on documents that belong to a specific domain, and thus cannot serve as generic binary sentence representations applicable to a wide variety of NLP tasks. In contrast, our model is trained on large corpora and seeks to provide general-purpose binary representations that can be leveraged in various application scenarios.

3 Proposed Approach

We aim to produce compact and binarized representations from continuous sentence embeddings while preserving the associated semantic information. Let x and f denote, respectively, an input sentence and the function defined by a pre-trained general-purpose sentence encoder, so that f(x) is the continuous embedding extracted by the encoder. The goal of our model is to learn a universal transformation g that converts f(x) into a highly informative binary sentence representation, i.e., g(f(x)), which can be used as a generic feature vector for a collection of downstream tasks. We explore four strategies to parametrize the transformation g.
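As a point of reference for the four strategies that follow, the minimal sketch below shows the interface they all share: f maps a sentence to a continuous vector and g maps that vector to a binary code. The placeholder encoder and the trivial sign-based g here are assumptions for illustration only, not the pre-trained InferSent encoder or any of the actual strategies.

```python
import numpy as np

def f(sentence: str) -> np.ndarray:
    """Placeholder for a pre-trained sentence encoder (e.g., InferSent).
    Returns a deterministic pseudo-random 4096-d vector so the sketch runs."""
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    return rng.standard_normal(4096).astype(np.float32)

def sign_binarize(h: np.ndarray) -> np.ndarray:
    """Trivial stand-in for g; Sections 3.1-3.4 define the actual choices."""
    return (h > 0).astype(np.uint8)

b = sign_binarize(f("A man is watching a movie ."))   # b = g(f(x))
print(b.shape, b[:8])
```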
3.1 Hard Threshold

We use h and b to denote the continuous and binary sentence embeddings, respectively, and L to denote the dimension of h. The first method binarizes the continuous representation by simply converting each dimension to either 0 or 1 according to a hard threshold. This strategy requires no training and operates directly on the pre-trained continuous embeddings. Let s be the hard threshold; for i = 1, 2, ..., L we have:

    b^{(i)} = \mathbb{1}_{\{h^{(i)} > s\}} = \frac{\operatorname{sign}(h^{(i)} - s) + 1}{2}.    (1)

One potential issue with this direct binarization method is that the information contained in the continuous representations may be largely lost, since there is no training objective encouraging the preservation of semantic information in the produced binary codes (Shen et al., 2018). Another disadvantage is that the length of the resulting binary code must be the same as that of the original continuous representation and cannot be chosen flexibly. In practice, however, we may want to learn shorter binary embeddings to further reduce memory or computation.

3.2 Random Projection

To tackle the limitations of the above direct binarization method, we consider an alternative strategy that also requires no training: simply applying a random projection to the pre-trained continuous representations. Wieting and Kiela (2018) have shown that random sentence encoders can effectively construct universal sentence embeddings from word vectors, while possessing the flexibility of adaptively altering the embedding dimension. Here, we are interested in exploring whether a random projection also works well for transforming continuous sentence representations into their binary counterparts.

We randomly initialize a matrix W ∈ R^{D×L}, where D denotes the dimension of the resulting binary representations. Inspired by the standard initialization heuristic employed in (Glorot and Bengio, 2010; Wieting and Kiela, 2018), the entries of the matrix are sampled uniformly: for i = 1, 2, ..., D and j = 1, 2, ..., L,

    W_{i,j} \sim \mathrm{Uniform}\!\left(-\tfrac{1}{\sqrt{D}}, \tfrac{1}{\sqrt{D}}\right).    (2)

After converting the continuous sentence embeddings to the desired dimension D with the randomly initialized matrix above, we further apply the operation in (1) to binarize them into the discrete/compact form. The dimension D can be set arbitrarily with this approach, which is easily applicable to any pre-trained sentence embeddings (since no training is needed). This strategy is related to Locality-Sensitive Hashing (LSH) for inferring binary embeddings (Van Durme and Lall, 2010).
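A minimal NumPy sketch of these two training-free strategies is given below; it assumes h is an L-dimensional continuous embedding (e.g., a 4096-d InferSent vector), and the threshold of 0 applied after the random projection is simply one reasonable choice of s.

```python
import numpy as np

def hard_threshold_binarize(h: np.ndarray, s: float = 0.0) -> np.ndarray:
    """Eq. (1): bit i is 1 iff h[i] > s; the code length equals len(h)."""
    return (h > s).astype(np.uint8)

def random_projection_binarize(h: np.ndarray, D: int, seed: int = 0) -> np.ndarray:
    """Eq. (2) followed by Eq. (1): project h from L to D dimensions with a
    uniformly initialized random matrix, then threshold (here at 0)."""
    L = h.shape[0]
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0 / np.sqrt(D), 1.0 / np.sqrt(D), size=(D, L))
    return (W @ h > 0.0).astype(np.uint8)

# Example on a random stand-in for a 4096-d pre-trained embedding.
h = np.random.randn(4096).astype(np.float32)
b_ht = hard_threshold_binarize(h)                 # 4096 bits, same length as h
b_rp = random_projection_binarize(h, D=2048)      # length chosen freely: 2048 bits
print(b_ht.shape, b_rp.shape)
```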
3.3 Principal Component Analysis

We also consider an alternative strategy that can adaptively choose the dimension of the resulting binary representations. Specifically, Principal Component Analysis (PCA) is utilized to reduce the dimensionality of the pre-trained continuous embeddings. Given a set of sentences {x_i}_{i=1}^{N} and their corresponding continuous embeddings {h_i}_{i=1}^{N} ⊂ R^L, we learn a projection matrix that reduces the embedding dimension while keeping the embeddings as distinct as possible.

After centering the embeddings as h_i \leftarrow h_i - \frac{1}{N}\sum_{j=1}^{N} h_j, the data matrix H = (h_1, h_2, \ldots, h_N) has the singular value decomposition (SVD) H = U\Lambda V^T, where \Lambda is an L × N matrix with the descending singular values of H on its diagonal, and U and V are orthogonal matrices. The correlation matrix can then be written as HH^T = U\Lambda^2 U^T. Assume that the diagonal matrix \Lambda^2 = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_L) has descending elements \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_L \geq 0. We select the first D rows of U as our projection matrix W = U_{1:D}; the correlation matrix of WH is then WHH^TW^T = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_D), which indicates that the embeddings are projected onto the D independent and most distinctive axes. After projecting the continuous embeddings into this representative lower-dimensional space, we apply the hard threshold function at 0 to obtain the binary representations (since the embeddings are zero-centered).

Figure 1: Proposed model architectures: (a) direct binarization with a hard threshold s; (b) reducing the dimensionality with either a random projection or PCA, followed by a binarization step; (c) an encoding-decoding framework with an additional semantic-preserving loss.

3.4 Autoencoder Architecture

The methods proposed above share a common issue: the objective does not explicitly encourage the learned binary codes to retain the semantic information of the original continuous embeddings, and a separate binarization step is employed after training. To address this shortcoming, we further consider an autoencoder architecture that leverages a reconstruction loss to endow the learned binary representations with more information. Specifically, an encoder network is utilized to transform the continuous embedding into a binary latent vector, which is then reconstructed by a decoder network.

For the encoder network, we use a matrix operation, followed by a binarization step, to extract useful features (similar to the random projection setup). Thus, for i = 1, 2, ..., D, we have:

    b^{(i)} = \mathbb{1}_{\{\sigma(W_i \cdot h + k^{(i)}) > s^{(i)}\}} = \frac{\operatorname{sign}(\sigma(W_i \cdot h + k^{(i)}) - s^{(i)}) + 1}{2},    (3)

where k is the bias term, k^{(i)} is its i-th element, and s^{(i)} denotes the threshold determining whether the i-th bit is 0 or 1.

During training, we may use either deterministic or stochastic binarization of the latent variable. In the deterministic case, s^{(i)} = 0.5 for all dimensions; in the stochastic case, s^{(i)} is sampled uniformly, s^{(i)} \sim \mathrm{Uniform}(0, 1). We conduct an empirical comparison between these two binarization strategies in Section 4.
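The NumPy sketch below mirrors the PCA procedure under one changed convention: the data are stored as an N × L matrix, so the paper's top-D left singular vectors of the L × N matrix H correspond to the top-D rows of Vᵀ here. The small dimensions in the example are only to keep the SVD fast.

```python
import numpy as np

def pca_binarize_fit(H: np.ndarray, D: int):
    """H: (N, L) matrix of continuous sentence embeddings.
    Returns (B, W, mean): (N, D) binary codes, the (D, L) projection and the mean."""
    mean = H.mean(axis=0, keepdims=True)
    Hc = H - mean                                    # centering step
    _, _, Vt = np.linalg.svd(Hc, full_matrices=False)
    W = Vt[:D]                                       # top-D principal axes, shape (D, L)
    B = (Hc @ W.T > 0.0).astype(np.uint8)            # project, then threshold at 0
    return B, W, mean

def pca_binarize_apply(h: np.ndarray, W: np.ndarray, mean: np.ndarray) -> np.ndarray:
    """Binarize a new embedding with the projection fitted above."""
    return ((h - mean.ravel()) @ W.T > 0.0).astype(np.uint8)

# Example: 1,000 random stand-ins for 4096-d embeddings -> 256-bit codes
# (the paper uses 512-4096 bits; 256 keeps this toy example quick).
H = np.random.randn(1000, 4096).astype(np.float32)
B, W, mean = pca_binarize_fit(H, D=256)
print(B.shape)   # (1000, 256)
```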
Prior work has shown that linear decoders are favorable for learning binary codes under the encoder-decoder framework (Carreira-Perpiñán and Raziperchikolaei, 2015; Dai et al., 2017; Shen et al., 2018). Inspired by these results, we employ a linear transformation to reconstruct the original continuous embeddings from the binary codes:

    \hat{h}^{(i)} = W'_i \cdot b + k'^{(i)},    (4)

where W' and k' are the learned weight and bias terms, respectively. The mean squared error between h and \hat{h} is employed as the reconstruction loss:

    L_{rec} = \frac{1}{D} \sum_{i=1}^{D} \left(h^{(i)} - \hat{h}^{(i)}\right)^2.    (5)

This objective encourages the binary vector b to encode more information from h (leading to a smaller reconstruction error). The straight-through (ST) estimator (Hinton, 2012) is utilized to estimate the gradients for the binary variable. The autoencoder model is optimized by minimizing the reconstruction loss over all sentences. After training, the encoder network is leveraged as the transformation that converts the pre-trained continuous embeddings into binary form.

3.4.1 Semantic-preserving Regularizer

Although the reconstruction objective can endow the binary variable with richer semantics, there is no loss that explicitly encourages the binary vectors to preserve the similarity information contained in the original continuous embeddings. Consequently, the model may attain a small reconstruction error yet yield sub-optimal binary representations (Tissier et al., 2019). To improve the semantic-preserving property of the inferred binary embeddings, we introduce an additional objective term.

Consider a triplet of sentences (x_α, x_β, x_γ) whose continuous embeddings are (h_α, h_β, h_γ), respectively. Suppose that the cosine similarity between h_α and h_β is larger than that between h_β and h_γ; then it is desirable that the Hamming distance between b_α and b_β be smaller than that between b_β and b_γ (notably, both a large cosine similarity and a small Hamming distance indicate that two sentences are semantically similar). Let d_c(·, ·) and d_h(·, ·) denote the cosine similarity and the Hamming distance (in the continuous and binary embedding spaces), respectively. Define l_{α,β,γ} as an indicator such that l_{α,β,γ} = 1 if d_c(h_α, h_β) ≥ d_c(h_β, h_γ), and l_{α,β,γ} = −1 otherwise. The semantic-preserving regularizer is then defined as:

    L_{sp} = \sum_{\alpha,\beta,\gamma} \max\{0,\; l_{\alpha,\beta,\gamma}\,[d_h(b_\alpha, b_\beta) - d_h(b_\beta, b_\gamma)]\}.    (6)

By penalizing L_{sp}, the learned transformation g is explicitly encouraged to retain the semantic-similarity information of the original continuous embeddings. The entire objective function to be optimized is thus:

    L = L_{rec} + \lambda_{sp} L_{sp},    (7)

where \lambda_{sp} controls the relative weight between the reconstruction loss (L_{rec}) and the semantic-preserving loss (L_{sp}).

3.5 Discussion

Another possible strategy is to train general-purpose binary embeddings directly from scratch, i.e., jointly optimizing the continuous-embedding training objective and the continuous-to-binary parameterization. However, our initial attempts demonstrated that this strategy leads to inferior empirical results. This observation is consistent with the results reported in (Kiros and Chan, 2018), where a binarization layer is appended directly to the InferSent architecture (Conneau et al., 2017) during training, which gives rise to a much larger drop in embedding quality (we conduct empirical comparisons with (Kiros and Chan, 2018) in Table 1). Therefore, here we focus on learning universal binary embeddings based on pre-trained continuous sentence representations.
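Below is a compact PyTorch sketch of the autoencoder strategy with the semantic-preserving regularizer. It is an illustrative re-implementation, not the released code: the way triplets are formed from consecutive batch entries, the exact straight-through formulation, and the random stand-in inputs are assumptions, while the learning rate, batch size, and λ_sp = 0.8 follow the training details reported in Section 4.2.

```python
import torch
import torch.nn as nn

class BinaryAutoencoder(nn.Module):
    """Encoder of Eq. (3) with a straight-through estimator; linear decoder of Eq. (4)."""
    def __init__(self, cont_dim=4096, n_bits=2048, stochastic=False):
        super().__init__()
        self.encoder = nn.Linear(cont_dim, n_bits)   # W h + k
        self.decoder = nn.Linear(n_bits, cont_dim)   # W' b + k'
        self.stochastic = stochastic

    def binarize(self, logits):
        probs = torch.sigmoid(logits)                            # sigma(W h + k)
        if self.stochastic and self.training:
            s = torch.rand_like(probs)                           # s^(i) ~ Uniform(0, 1)
        else:
            s = 0.5                                              # deterministic threshold
        hard = (probs > s).float()
        # Straight-through estimator: forward pass uses the hard bits,
        # backward pass uses the gradient of the sigmoid probabilities.
        return hard + probs - probs.detach()

    def forward(self, h):
        b = self.binarize(self.encoder(h))                       # binary code, Eq. (3)
        return b, self.decoder(b)                                # reconstruction, Eq. (4)

def semantic_preserving_loss(h, b):
    """Triplet hinge of Eq. (6); triplets are built from consecutive batch entries."""
    h_a, h_b, h_c = h[:-2], h[1:-1], h[2:]
    b_a, b_b, b_c = b[:-2], b[1:-1], b[2:]
    cos = nn.functional.cosine_similarity
    l = (cos(h_a, h_b) >= cos(h_b, h_c)).float() * 2 - 1         # indicator l in {+1, -1}
    d_ab = (b_a - b_b).abs().sum(dim=1)                          # Hamming distance d_h
    d_bc = (b_b - b_c).abs().sum(dim=1)
    return torch.relu(l * (d_ab - d_bc)).mean()

# One training step on a batch of pre-trained continuous embeddings.
model = BinaryAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-5)
lambda_sp = 0.8
h = torch.randn(64, 4096)                                        # stand-in for InferSent vectors
b, h_rec = model(h)
loss = nn.functional.mse_loss(h_rec, h) + lambda_sp * semantic_preserving_loss(h, b)  # Eq. (7)
loss.backward()
opt.step()
```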
4 Experimental Setup

4.1 Pre-trained Continuous Embeddings

Our proposed model aims to produce highly informative binary sentence embeddings based upon pre-trained continuous representations. In this paper, we utilize InferSent (Conneau et al., 2017) as the continuous embeddings, given its effectiveness and widespread use; note that all four proposed strategies can easily be extended to other pre-trained general-purpose sentence embeddings as well. Specifically, a bidirectional LSTM architecture with a max-pooling operation over the hidden units is employed as the sentence encoder, and the model parameters are optimized on natural language inference tasks, i.e., the Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and Multi-Genre Natural Language Inference (MultiNLI) (Williams et al., 2017) datasets.

4.2 Training Details

Our model is trained using Adam (Kingma and Ba, 2014) with a learning rate of 1 × 10^{-5} for all parameters. The number of bits (i.e., the dimension) of the binary representation is set to 512, 1024, 2048 or 4096; the best choice for each model is selected on the validation set, and the corresponding test results are presented in Table 1. The batch size is 64 for all model variants. The hyperparameter λ_{sp} is selected from {0.2, 0.5, 0.8, 1} on the validation set, and 0.8 is found to deliver the best empirical results. Training with the autoencoder setup takes only about one hour to converge, and is thus readily applicable to even larger datasets.

| Model | Dim | MR | CR | SUBJ | MPQA | SST | STS14 | STSB | SICK-R | MRPC |
|---|---|---|---|---|---|---|---|---|---|---|
| Continuous (dense) sentence embeddings |  |  |  |  |  |  |  |  |  |  |
| fastText-BoV | 300 | 78.2 | 80.2 | 91.8 | 88.0 | 82.3 | .65/.63 | 58.1/59.0 | 0.698 | 67.9/74.3 |
| SkipThought | 4800 | 76.5 | 80.1 | 93.6 | 87.1 | 82.0 | .29/.35 | 41.0/41.7 | 0.595 | 57.9/66.6 |
| SkipThought-LN | 4800 | 79.4 | 83.1 | 93.7 | 89.3 | 82.9 | .44/.45 | – | – | – |
| InferSent-FF | 4096 | 79.7 | 84.2 | 92.7 | 89.4 | 84.3 | .68/.66 | 55.6/56.2 | 0.612 | 67.9/73.8 |
| InferSent-G | 4096 | 81.1 | 86.3 | 92.4 | 90.2 | 84.6 | .68/.65 | 70.0/68.0 | 0.719 | 67.4/73.2 |
| Binary (compact) sentence embeddings |  |  |  |  |  |  |  |  |  |  |
| InferLite-short | 256 | 73.7 | 81.2 | 83.2 | 86.2 | 78.4 | 0.61 | 63.4/63.3 | 0.597 | 61.7/70.1 |
| InferLite-medium | 1024 | 76.3 | 83.2 | 87.8 | 88.4 | 81.3 | 0.67 | 64.9/64.9 | 0.642 | 64.1/72.0 |
| InferLite-long | 4096 | 77.7 | 83.7 | 89.6 | 89.1 | 82.3 | 0.68 | 67.9/67.6 | 0.663 | 65.4/72.9 |
| HT-binary | 4096 | 76.6 | 79.9 | 91.0 | 88.4 | 80.6 | .62/.60 | 55.8/53.6 | 0.652 | 65.6/70.4 |
| Rand-binary | 2048 | 78.7 | 82.7 | 90.4 | 88.9 | 81.3 | .66/.63 | 65.1/62.3 | 0.704 | 65.7/70.8 |
| PCA-binary | 2048 | 78.4 | 84.5 | 90.7 | 89.4 | 81.0 | .66/.65 | 63.7/62.8 | 0.518 | 65.0/69.7 |
| AE-binary | 2048 | 78.7 | 84.9 | 90.6 | 89.6 | 82.1 | .68/.66 | 71.7/69.7 | 0.673 | 65.8/70.8 |
| AE-binary-SP | 2048 | 79.1 | 84.6 | 90.8 | 90.0 | 82.7 | .69/.67 | 73.2/70.6 | 0.705 | 67.2/72.0 |

Table 1: Performance on the test set for 10 downstream tasks. STS14, STSB and MRPC are evaluated with Pearson and Spearman correlations, and SICK-R is measured with Pearson correlation; all other datasets are evaluated with test accuracy. InferSent-G uses GloVe (G) word embeddings, while InferSent-FF employs FastText (F) embeddings with Fixed (F) padding. The empirical results of InferLite with different lengths of binary embeddings, i.e., 256, 1024 and 4096, are considered.

4.3 Evaluation

To facilitate comparisons with other baseline methods, we use the SentEval toolkit¹ (Conneau and Kiela, 2018) to evaluate the learned binary (compact) sentence embeddings.
Concretely, the learned representations are tested on a series of downstream tasks to assess their transferability (with the encoder weights fixed), which can be categorized as follows:

• Sentence classification, including sentiment analysis (MR, SST), product reviews (CR), subjectivity classification (SUBJ), opinion-polarity detection (MPQA) and question-type classification (TREC). A linear classifier is trained with the generic sentence embeddings as the input features. The default SentEval settings are used for all datasets.

• Sentence matching, which comprises semantic relatedness (SICK-R, STS14, STSB) and paraphrase detection (MRPC). In particular, each pair of sentences in the STS14 dataset is associated with a similarity score from 0 to 5 (as the corresponding label). The Hamming distance between the binary representations is directly leveraged as the prediction score (without any classifier parameters).

¹ https://github.com/facebookresearch/SentEval

For the sentence-matching benchmarks, to allow a fair comparison with the continuous embeddings, we do not use the classifier architecture from SentEval. Instead, we obtain the predicted relatedness by directly computing the cosine similarity between the continuous embeddings. Consequently, there are no classifier parameters for either the binary or the continuous representations. The same evaluation metrics as in SentEval (Conneau and Kiela, 2018) are utilized for all tasks. For MRPC, the predictions are made by simply judging whether a sentence pair's score is larger or smaller than the averaged Hamming distance (or cosine similarity).

4.4 Baselines

We consider several strong baselines to compare with the proposed methods, including both continuous (dense) and binary (compact) representations. For continuous generic sentence embeddings, we make comparisons with fastText-BoV (Joulin et al., 2016), Skip-Thought Vectors (Kiros et al., 2015) and InferSent (Conneau et al., 2017). As for binary embeddings, we consider the binarized version of InferLite (Kiros and Chan, 2018), which, to the best of our knowledge, is the only previously reported general-purpose binary representation baseline.

5 Experimental Results

We experiment with five model variants for learning general-purpose binary embeddings: HT-binary (hard threshold, selected from {0, 0.01, 0.1} on the validation set), Rand-binary (random projection), PCA-binary (dimensionality reduction with principal component analysis), AE-binary (autoencoder with the reconstruction objective) and AE-binary-SP (autoencoder with both the reconstruction objective and the Semantic-Preserving loss). Our code will be released to encourage future research.
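To make the parameter-free scoring protocol of Section 4.3 concrete, here is a small sketch with toy inputs (the function names are ours): relatedness scores are the negated Hamming distance for binary codes or the cosine similarity for continuous embeddings, and MRPC predictions follow the rule of comparing each pair's score against the dataset average.

```python
import numpy as np

def hamming_score(b1: np.ndarray, b2: np.ndarray) -> float:
    """Relatedness score for binary codes: negated Hamming distance (higher = closer)."""
    return -float(np.count_nonzero(b1 != b2))

def cosine_score(h1: np.ndarray, h2: np.ndarray) -> float:
    """Relatedness score for continuous embeddings."""
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2)))

def mrpc_predict(scores) -> np.ndarray:
    """Label a pair as a paraphrase iff its score exceeds the average score."""
    scores = np.asarray(scores, dtype=np.float64)
    return (scores > scores.mean()).astype(int)

# Toy example: three pairs of random 2048-bit codes.
rng = np.random.default_rng(0)
pairs = [(rng.integers(0, 2, 2048), rng.integers(0, 2, 2048)) for _ in range(3)]
scores = [hamming_score(b1, b2) for b1, b2 in pairs]
print(scores, mrpc_predict(scores))
# For STS14/STSB/SICK-R, these scores are instead correlated (Pearson/Spearman)
# with the gold similarity labels.
```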
5.1 Task Transfer Evaluation

We evaluate the binary sentence representations produced by the different methods on a set of transfer tasks; the results are shown in Table 1. The proposed autoencoder architecture generally demonstrates the best results. In particular, when combined with the semantic-preserving loss in (6), AE-binary-SP exhibits higher performance than the standard autoencoder. It is worth noting that the Rand-binary and PCA-binary model variants also show competitive performance despite their simplicity. These strategies are quite promising given that, once the pre-trained continuous sentence representations are available, no training is required at all.

Another important result is that AE-binary-SP achieves competitive results relative to InferSent, with only about a 2% loss on most datasets, and it even performs on par with InferSent on several datasets, such as MPQA and STS14.

On the sentence-matching tasks, the resulting binary codes are evaluated merely with Hamming-distance features (as mentioned above). To allow a fair comparison, we compare the predicted scores with the cosine-similarity scores based upon the continuous representations (there are no additional classifier parameters in either case). The binary codes deliver promising empirical results relative to their continuous counterparts, and even slightly outperform InferSent on the STS14 dataset. We also find that our AE-binary-SP model variant consistently demonstrates superior results compared with the InferLite baselines, which optimize the NLI objective directly over the binary representations. This may be attributed to the difficulty of backpropagating gradients through discrete/binary variables, and would be an interesting direction for future research.

5.2 Nearest Neighbor Retrieval

Case Study. One major advantage of binary sentence representations is that the similarity of two sentences can be evaluated by merely calculating the Hamming distance between their binary codes. To gain more intuition about the semantic information encoded in the binary embeddings, we convert all the sentences in the SNLI dataset into continuous and binary vectors (with InferSent-G and AE-binary-SP, respectively). The top-3 closest sentences are retrieved based upon the corresponding metrics, and the resulting samples are shown in Table 2. It can be observed that the sentences selected based upon the Hamming distance indeed convey very similar semantic meanings. In some cases, the results with binary codes are even more reasonable than those with the continuous embeddings. For example, for the first query, all three sentences in the left column relate to "watching a movie", while one of the sentences in the right column is about "sleeping".

Retrieval Speed. A bitwise comparison is much faster than the element-wise multiplication between real-valued vectors (Tissier et al., 2019). To verify the speed improvement, we sample 10,000 sentence pairs from SNLI and extract their continuous and binary embeddings (both with dimension 4096). We record the time needed to compute the cosine similarity and the Hamming distance between the corresponding representations. With our Python implementation, this takes 3.67 µs and 288 ns respectively, indicating that calculating the Hamming distance is over 12 times faster. Our implementation is not optimized, and the running time of computing the Hamming distance can be improved further (to be proportional to the number of differing bits, rather than the input length²).

² https://en.wikipedia.org/wiki/Hamming_distance
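The sketch below illustrates why the Hamming comparison is cheap: packing a 4096-bit code into bytes reduces the distance computation to an XOR plus a popcount, and nearest neighbors can then be ranked by that distance. It is an illustration rather than the implementation behind the timings above, and `int.bit_count()` requires Python 3.10 or later.

```python
import numpy as np

def pack_code(b: np.ndarray) -> bytes:
    """Pack a 0/1 vector (length divisible by 8) into bytes."""
    return np.packbits(b.astype(np.uint8)).tobytes()

def hamming_packed(p1: bytes, p2: bytes) -> int:
    """XOR the packed codes and popcount the result (Python >= 3.10)."""
    x = int.from_bytes(p1, "little") ^ int.from_bytes(p2, "little")
    return x.bit_count()

def cosine(h1: np.ndarray, h2: np.ndarray) -> float:
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2)))

# Toy nearest-neighbor retrieval over 1,000 random 4096-bit codes.
rng = np.random.default_rng(0)
H = rng.standard_normal((1000, 4096))
codes = [pack_code(h > 0) for h in H]

query = codes[0]
dists = np.array([hamming_packed(query, c) for c in codes])
top3 = np.argsort(dists)[1:4]          # skip index 0, the query itself
print("top-3 neighbors by Hamming distance:", top3, dists[top3])
print("cosine(query, best):", round(cosine(H[0], H[top3[0]]), 3))
```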
2https://en.wikipedia.org/wiki/ Hamming_distance 114 Hamming Distance (binary embeddings) Cosine Similarity (continuous embeddings) Query: Several people are sitting in a movie theater . A group of people watching a movie at a theater . A group of people watching a movie at a theater . A crowd of people are watching a movie indoors . A man is watching a movie in a theater . A man is watching a movie in a theater . Some people are sleeping on a sofa in front of the television . Query: A woman crossing a busy downtown street . A lady is walking down a busy street . A woman walking on the street downtown . A woman is on a crowded street . A lady is walking down a busy street . A woman walking on the street downtown . A man and woman walking down a busy street . Query: A well dressed man standing in front of piece of artwork . A well dressed man standing in front of an abstract fence painting . A man wearing headphones is standing in front of a poster . A man wearing headphones is standing in front of a poster . A man standing in front of a chalkboard points at a drawing . A man in a blue shirt standing in front of a garage-like structure painted with geometric designs . A man in a blue shirt standing in front of a garage-like structure painted with geometric designs . Query: A woman is sitting at a bar eating a hamburger . A woman sitting eating a sandwich . A woman is sitting in a cafe eating lunch . A woman is sitting in a cafe eating lunch . A woman is eating at a diner . The woman is eating a hotdog in the middle of her bedroom . A woman is eating her meal at a resturant . Query: Group of men trying to catch fish with a fishing net . Two men are on a boat trying to fish for food during a sunset . There are three men on a fishing boat trying to catch bass . There are three men on a fishing boat trying to catch bass . Two men are trying to fish . Two men pull a fishing net up into their red boat . Two men are on a boat trying to fish for food during a sunset . Table 2: Nearest neighbor retrieval results on the SNLI dataset. Given a a query sentence, the left column shows the top-3 retrieved samples based upon the hamming distance with all sentences’ binary representations, while the right column exhibits the samples according to the cosine similarity of their continuous embeddings. λsp 0.0 0.2 0.5 0.8 1.0 Accuracy 78.2 78.5 78.5 79.1 78.4 Table 3: Ablation study for the AE-binary-SP model with different choices of λsp (evaluated with test accuracy on the MR dataset). 5.3.2 Sampling strategy As discussed in Section 3.4, the binary latent vector b can be obtained with either a deterministic or stochastically sampled threshold. We compare these two sampling strategies on several downstream tasks. As illustrated in Figure 2, setting a fixed threshold demonstrates better empirical performance on all the datasets. Therefore, deterministic threshold is employed for all the autoencoder model variants in our experiments. MR CR MPQA SUBJ SST SICKE MRPC Dataset 0.65 0.70 0.75 0.80 0.85 0.90 0.95 Performance Deterministic Stochastic Figure 2: The comparison between deterministic and stochastic sampling for the autoencoder strategy. 5.3.3 The effect of embedding dimension Except for the hard threshold method, other three proposed strategies all possess the flexibility of adaptively choosing the dimension of learned binary representations. 
To explore the sensitivity of 512 1024 2048 4096 Number of Bits 71 72 73 74 75 76 77 78 79 80 Accuracy (%) Random PCA AE AE-SP Figure 3: The test accuracy of different model on the MR dataset across 512, 1024, 2048, 4096 bits for the learned binary representations. extracted binary embeddings to their dimensions, we run four model variants (Rand-binary, PCAbinary, AE-binary, AE-binary-SP) with different number of bits (i.e., 512, 1024, 2048, 4096), and their corresponding results on the MR dataset are shown in Figure 3. For the AE-binary and AE-binary-SP models, longer binary codes consistently deliver better results. While for the Rand-binary and PCA-binary variants, the quality of inferred representations is much less sensitive to the embedding dimension. Notably, these two strategies exhibit competitive performance even with only 512 bits. Therefore, in the case where less memory footprint or little training is preferred, Rand-binary and PCA-binary could be more judicious choices. 6 Conclusion This paper presents a first step towards learning binary and general-purpose sentence representations that allow for efficient storage and fast retrieval over massive corpora. To this end, we ex115 plore four distinct strategies to convert pre-trained continuous sentence embeddings into a binarized form. Notably, a regularized autoencoder augmented with semantic-preserving loss exhibits the best empirical results, degrading performance by only around 2% while saving over 98% memory footprint. Besides, two other model variants with a random projection or PCA transformation require no training and demonstrate competitive embedding quality even with relatively small dimensions. Experiments on nearest-neighbor sentence retrieval further validate the effectiveness of proposed framework. References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. CoRR, abs/1608.04207. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP. Miguel A Carreira-Perpin´an and Ramin Raziperchikolaei. 2015. Hashing with binary autoencoders. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 557–566. Daniel Cer, Yinfei Yang, Sheng yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder. CoRR, abs/1803.11175. Ting Chen, Martin Renqiang Min, and Yizhou Sun. 2018. Learning k-way d-dimensional discrete codes for compact embedding representations. arXiv preprint arXiv:1806.09464. Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. arXiv preprint arXiv:1803.05449. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In EMNLP. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in neural information processing systems, pages 3079–3087. Bo Dai, Ruiqi Guo, Sanjiv Kumar, Niao He, and Le Song. 2017. Stochastic generative hashing. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 913–922. JMLR. org. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. 
Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Zhe Gan, Yunchen Pu, Ricardo Henao, Chunyuan Li, Xiaodong He, and Lawrence Carin. 2017. Learning generic sentence representations using convolutional neural networks. In EMNLP. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In HLT-NAACL. G Hinton. 2012. Neural networks for machine learning. coursera,[video lectures]. Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144. Yacine Jernite, Samuel R. Bowman, and David A Sontag. 2017. Discourse-based objectives for fast unsupervised sentence representation learning. CoRR, abs/1705.00557. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Jamie Kiros and William Chan. 2018. Inferlite: Simple universal sentence representations from natural language inference data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4868–4874. Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. In NIPS. Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. ICLR. Allen Nie, Erin D. Bennett, and Noah D. Goodman. 2017. Dissent: Sentence representation learning from explicit discourse relations. CoRR, abs/1710.04334. Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2018. Unsupervised learning of sentence embeddings using compositional n-gram features. In NAACL-HLT. 116 Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3us-west-2. amazonaws. com/openai-assets/researchcovers/languageunsupervised/language understanding paper. pdf. Sujith Ravi and Zornitsa Kozareva. 2018. Selfgoverning neural networks for on-device short text classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 804–810. Sebastian Ruder and Jeremy Howard. 2018. Universal language model fine-tuning for text classification. In ACL. Ruslan Salakhutdinov and Geoffrey Hinton. 2009. Semantic hashing. International Journal of Approximate Reasoning, 50(7):969–978. Dinghan Shen, Qinliang Su, Paidamoyo Chapfuwa, Wenlin Wang, Guoyin Wang, Lawrence Carin, and Ricardo Henao. 2018. Nash: Toward end-to-end neural architecture for generative semantic hashing. In ACL. Raphael Shu and Hideki Nakayama. 2017. Compressing word embeddings via deep compositional code learning. arXiv preprint arXiv:1711.01068. Shuai Tang and Virginia R de Sa. 2018. Improving sentence representations with multi-view frameworks. arXiv preprint arXiv:1810.01064. Julien Tissier, Amaury Habrard, and Christophe Gravier. 2019. Near-lossless binarization of word embeddings. AAAI. Benjamin Van Durme and Ashwin Lall. 2010. Online generation of locality sensitive hash signatures. 
In Proceedings of the ACL 2010 Conference Short Papers, pages 231–235. Association for Computational Linguistics. Jingdong Wang, Heng Tao Shen, Jingkuan Song, and Jianqiu Ji. 2014. Hashing for similarity search: A survey. arXiv preprint arXiv:1408.2927. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sentence embeddings. CoRR, abs/1511.08198. John Wieting and Kevin Gimpel. 2018. Paranmt-50m: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In ACL. John Wieting and Douwe Kiela. 2018. No training required: Exploring random encoders for sentence classification. CoRR, abs/1901.10444. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426. Jiaming Xu, Peng Wang, Guanhua Tian, Bo Xu, Jun Zhao, Fangyuan Wang, and Hongwei Hao. 2015. Convolutional neural networks for text hashing. In Twenty-Fourth International Joint Conference on Artificial Intelligence. Dell Zhang, Jun Wang, Deng Cai, and Jinsong Lu. 2010. Self-taught hashing for fast similarity search. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval, pages 18–25. ACM.
2019
11
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1154–1159 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1154 Neural News Recommendation with Topic-Aware News Representation Chuhan Wu1, Fangzhao Wu2, Mingxiao An3, Yongfeng Huang1, and Xing Xie2 1Department of Electronic Engineering, Tsinghua University, Beijing 100084, China 2Microsoft Research Asia, Beijing 100080, China 3University of Science and Technology of China, Hefei 230026, China [email protected], {fangzwu,xingx}@microsoft.com, [email protected], [email protected] Abstract News recommendation can help users find interested news and alleviate information overload. The topic information of news is critical for learning accurate news and user representations for news recommendation. However, it is not considered in many existing news recommendation methods. In this paper, we propose a neural news recommendation approach with topic-aware news representations. The core of our approach is a topic-aware news encoder and a user encoder. In the news encoder we learn representations of news from their titles via CNN networks and apply attention networks to select important words. In addition, we propose to learn topic-aware news representations by jointly training the news encoder with an auxiliary topic classification task. In the user encoder we learn the representations of users from their browsed news and use attention networks to select informative news for user representation learning. Extensive experiments on a real-world dataset validate the effectiveness of our approach. 1 Introduction Online news platforms such as Google News and MSN News have attracted hundreds of millions of users to read news online (Das et al., 2007; Lavie et al., 2010). Massive news are generated everyday, making it impossible for users to read all news to find their interested content (Phelan et al., 2011). Thus, personalized news recommendation is very important for online news platforms to help users find their interested news and alleviate information overload (IJntema et al., 2010). Learning accurate representations of news and users is critical for news recommendation (Wu et al., 2019b,a). Several deep learning based methods have been proposed for this task (Okura et al., 2017; Wang et al., 2018; Kumar et al., 2017; Khattar et al., 2018; Zheng et al., 2018). For example, Title Topic James Harden's incredible heroics lift Rockets over Warriors Sports These Are Some of The Safest Airlines in the World Travel Weekend snowstorm forecast from Midwest to East Coast Unlabeled Figure 1: Three example news articles. Okura et al. (2017) proposed to learn news representations from news bodies via denoising autoencoders, and learn user representations from the representations of their browsed news via a gated recurrent unit (GRU) network. Wang et al. (2018) proposed to learn news representations from news titles via a knowledge-aware convolutional neural network (CNN), and learn user representations from news representations using the similarity between candidate news and browsed news. However, these methods do not take the topic information of news into consideration. Our work is motivated by the following observations. First, the topic information of news is useful for news recommendation. For example, if a user clicks many news with the topic “sport”, we can infer she is probably interested in sports. 
Thus, exploiting the topic information of news has the potential to learn more accurate news and user representations. Second, not all news articles contain topic labels, since it is very expensive and timeconsuming to manually annotate the massive news articles emerging everyday. Thus, it is not suitable to directly incorporate the topic labels of news as model input. Third, different words in the same news may have different informativeness in representing news. For example, in Fig. 1 the word “Airlines” is more informative than “Some”. Besides, different news may also have different importance for user representation. For instance, the first news in Fig. 1 is more informative than the third one in inferring the interest of users. In this paper, we propose a neural news recom1155 mendation approach with topic-aware news representations (TANR) which exploit the useful topic information in news. The core of our approach is a topic-aware news encoder and a user encoder. In the news encoder, we learn the representations of news from their titles by capturing the local contexts via CNNs. Since different words may have different informativeness for news representation, we apply attention network to select important words for news representation learning. In addition, we propose to learn topic-aware news representations by jointly training the news encoder with an auxiliary topic classification task. In the user encoder, we learn representations of users from the representations of their browsed news. Since different news may have different informativeness for user representation, we apply attention network to select informative news for user representation learning. Extensive experiments are conducted on a real-world dataset. The results show our approach can effectively improve the performance of news recommendation. 2 Our Approach In this section, we first introduce our basic neural news recommendation model. Then we introduce how to learn topic-aware news representations. 2.1 Neural News Recommendation Model The architecture of our basic neural news recommendation model is shown in Fig. 2. It consists of three major modules, i.e., news encoder, user encoder and click predictor. News Encoder. The news encoder module is used to learn representations of news from their titles. It contains three layers. The first one is word embedding, which converts a news title from a word sequence into a vector sequence. Denote a news title as [w1, w2, ..., wM], where M is title length. It is converted into word vector sequence [e1, e2, ..., eM] via a word embedding matrix. The second layer is a CNN network (Kim, 2014). Local contexts are important for understanding news titles. For example, in the news title “90th Birthday of Mickey mouse”, the local contexts of “mouse” such as “Mickey” is useful for inferring it is a comic character name. Thus, we use CNN to learn contextual word representations by capturing local contexts. The CNN layer takes the word vectors as input, and outputs the contextual word representations [c1, c2, ..., cM]. 𝒓𝒓1 𝐷𝐷1 𝐷𝐷𝒊𝒊 Browsed News 𝐷𝐷𝑁𝑁 𝒓𝒓𝒊𝒊 𝒓𝒓𝑁𝑁 Candidate News 𝐷𝐷𝑐𝑐 𝒖𝒖 News Encoder 𝒓𝒓𝑐𝑐 Dot ෝ𝒚𝒚 CNN Word Embedding 𝒆𝒆1 𝒆𝒆𝑖𝑖 𝒆𝒆𝑀𝑀 𝒄𝒄1 𝒄𝒄𝑖𝑖 𝒄𝒄𝑀𝑀 𝑤𝑤1 𝑤𝑤𝑖𝑖 𝑤𝑤𝑀𝑀 𝛼𝛼1 𝒕𝒕 𝛼𝛼2 𝒕𝒕 𝛼𝛼𝑀𝑀 𝒕𝒕 𝛼𝛼𝑁𝑁 𝒏𝒏 𝛼𝛼𝑖𝑖 𝒏𝒏 𝛼𝛼1 𝒏𝒏 User Encoder Click Predictor 𝒓𝒓 Click Probability 𝒒𝒒𝑡𝑡 𝒒𝒒𝒏𝒏 Output News Encoder News Encoder News Encoder News Encoder Figure 2: The framework of the basic model. The third layer is an attention network. 
Different words in the same news title may have different importance in representing news. For example, in the first news of Fig. 1, the word “Rockets” is more informative than “over” for news representation. Thus, we propose to use attention mechanism to select important words in news titles to learn informative news representations. Denote the attention weight of the ith word in a news title as αt i: at i = qT t tanh(Vt × ct i + vt), (1) αt i = exp(at i) PM j=1 exp(at j) , (2) where Vt and vt are parameters, qt is the attention query vector. The final representation of a news title is the summation of the contextual representations of its words weighted by their attention weight, i.e., r = PM i=1 αt ici. User Encoder. The user encoder module is used to learn the representations of users from the representations of their browsed news. Different news browsed by the same user may have different informativeness for representing this user. For example, the news “The best movies in 2018” is more informative than the news “Winter storms next week” in inferring user interests. Therefore, we apply a news attention network to select important news to learn more informative user representations. Denote the attention weight of the ith browsed news as αn i : an i = qT n tanh(Vn × ri + vn), (3) αn i = exp(an i ) PN j=1 exp(an j ) , (4) where qn, Vn and vn are the parameters, and N is the number of browsed news. The final repre1156 sentation of a user is the summation of the representations of her browsed news weighted by their attentions, i.e., u = PN i=1 αn i ri. Click Predictor. The click predictor module is used to predict the probability of a user clicking a candidate news based on their hidden representations. Denote the representation of a candidate news Dc as rc. Following (Okura et al., 2017), the click probability score ˆy is calculated by the inner product of the representation vectors of the user and the candidate news, i.e., ˆy = uT rc. Motivated by (Huang et al., 2013), we propose to use negative sampling techniques for model training. For each news browsed by a user (denoted as positive sample), we randomly sample K news displayed in the same impression but not click by this user as negative samples. We then jointly predict the click probability scores of the positive news ˆy+ and the K negative news [ˆy− 1 , ˆy− 2 , ..., ˆy− K]. In this way, we formulate the news click prediction problem as a pseudo K + 1way classification task. The posterior click probability of a positive sample is calculated as follows: pi = exp(ˆy+ i ) exp(ˆy+ i ) + PK j=1 exp(ˆy− i,j) . (5) The loss function for news recommendation is the negative log-likelihood of all positive samples: LNewsRec = − X i∈S log(pi), (6) where S is the set of positive training samples. 2.2 Topic-Aware News Encoder The topic information of news is useful for news recommendation. For example, if a user browses many news with the topic “sport”, then she may be interested in sports. Thus, exploiting the news topics has the potential to improve the representations of news and users. However, not all news in online news platforms contain topic labels, since it is very costly and time-consuming to annotate the massive news emerging everyday. Thus, instead of incorporating news topics as model input, we propose to learn topic-aware news encoder which can extract topic information from news titles by jointly training it with an auxiliary news topic classification task, as shown in Fig. 3. 
We propose a news topic classification model for this task, which consists of a news encoder module and a topic predictor module. The news encoder 𝐷𝐷 News Encoder Output Dense ො𝒕𝒕 𝒓𝒓 Predicted Category Figure 3: The framework of topic-aware news encoder. module is shared with the news recommendation model. The topic predictor is used to predict the topic probability distribution from news representation as follows: ˆt = softmax(Wt × r + bt), (7) where Wt and bt are parameters, and ˆt is the predicted topic distribution. The loss function of the topic classification task is formulated as follows: LTopic = −1 Nt Nt X i=1 Kc X k=1 ti,k log(ˆti,k), (8) where Nt is the number of news with topic labels, Kc is the number of topic categories, and ti,k and ˆti,k are the gold and predicted probability of the ith news in the k-th topic category. We jointly train the news recommendation and topic classification tasks. The overall loss function is a weighted summation of the news recommendation and topic classification losses: L = LNewsRec + λLTopic, (9) where λ is a positive coefficient. Since the news recommendation and the topic classification tasks share the same news encoder, via joint training, the news recommendation model can capture the topic information to learn topic-aware news and user representations for news recommendation. 3 Experiments 3.1 Datasets and Experimental Settings We conducted experiments on a real-world dataset which is collected from MSN News1 logs in one month (from 12/13/2018 to 01/12/2019). The basic statistics of this dataset are summarized in Table 1. In addition, the topic distributions in our dataset are illustrated in Fig. 4. We used the logs in the last week for test, and the rest for training. Besides, we randomly sampled 10% of training data as the validation set. 1https://www.msn.com/en-us/news 1157 # users 10,000 avg. # words per title 11.29 # news 42,255 # topic categories 14 # impressions 445,230 # positive samples 489,644 # samples 7,141,584 # negative samples 6,651,940 Table 1: Statistics of our dataset. Figure 4: Topic distributions in our dataset. In our experiments, word embeddings are 300dimensional and were initialized by the pretrained Glove embedding (Pennington et al., 2014). The CNN network has 400 filters, and their window sizes are 3. The negative sampling ratio K is 4 and the coefficient λ is 0.2. Adam (Kingma and Ba, 2014) is used as the optimization algorithm, and the batch size is 64. These hyperparameters were selected according to the validation set. The metrics used for result evaluation in our experiments include AUC, MRR, nDCG@5 and nDCG@10. We repeated each experiment 10 times and reported the average results. 
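To make the attention mechanism and training objectives above concrete, the following sketch re-states the additive attention shared by the word-level and news-level encoders (Eqs. 1–4), the pseudo (K+1)-way click loss built from one positive and K negative samples (Eqs. 5–6), and the joint objective that adds the topic classification loss (Eq. 9). The shapes, the attention dimension and the NumPy formulation are assumptions made for illustration; the actual model is trained end-to-end with learned parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(H, V, v, q):
    # H: (M, d) contextual word (or browsed-news) representations.
    # a_i = q^T tanh(V h_i + v);  alpha = softmax(a);  output = sum_i alpha_i h_i
    scores = np.tanh(H @ V.T + v) @ q      # shape (M,)
    alpha = softmax(scores)
    return alpha @ H                        # shape (d,)

def click_loss(pos_score, neg_scores):
    # Pseudo (K+1)-way softmax over one positive and K negative click scores.
    logits = np.concatenate(([pos_score], neg_scores))
    return -np.log(softmax(logits)[0])

def joint_loss(news_rec_loss, topic_loss, lam=0.2):
    # Overall objective L = L_NewsRec + lambda * L_Topic (lambda = 0.2 in the paper).
    return news_rec_loss + lam * topic_loss
```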
3.2 Performance Evaluation We evaluate the performance of our TANR approach by comparing it with several baseline methods, including: (1) LibFM (Rendle, 2012), a feature-based matrix factorization method for recommendation; (2) CNN (Kim, 2014), using Kim CNN to learn news representations from news titles, and building user representations via max pooling; (3) DSSM (Huang et al., 2013), using the deep structured semantic model by regarding the concatenation of browsed news titles as the query and candidate news as the documents; (4) Wide&Deep (Cheng et al., 2016), a combination of a wide linear channel and a deep neural network channel; (5) DeepFM (Guo et al., 2017), a combination of factorization machines and neural networks; (6) DFM (Lian et al., 2018), a deep fusion model combining dense layers with different depths and using an attention mechanism to select important features; (7) GRU (Okura et al., 2017), using autoencoders to learn news representations and using a GRU network to learn user representations; (8) DKN (Wang et al., 2018), a neural news recommendation method which can utilize entity information in knowledge graphs via a knowledge-aware CNN; (9) TANR-basic, our basic neural news recommendation model; (10) TANR, our approach with topic-aware news representations. The results of different methods are summarized in Table 2.
Methods AUC MRR nDCG@5 nDCG@10
LibFM 0.5660 0.2924 0.3015 0.3932
CNN 0.5689 0.2956 0.3043 0.3955
DSSM 0.6009 0.3099 0.3261 0.4185
Wide&Deep 0.5735 0.2989 0.3094 0.3996
DeepFM 0.5774 0.3031 0.3122 0.4019
DFM 0.5860 0.3034 0.3175 0.4067
DKN 0.5869 0.3044 0.3184 0.4071
GRU 0.6102 0.2811 0.3035 0.3952
TANR-basic 0.6221 0.3246 0.3487 0.4329
TANR* 0.6289 0.3315 0.3544 0.4392
Table 2: The results of different methods. *The improvement is significant at p < 0.01.
From Table 2, we have several observations. First, the methods based on neural networks (e.g., CNN, DSSM and TANR) outperform LibFM. This is because neural networks can learn better news and user representations than traditional matrix factorization methods. Second, both TANR-basic and TANR can outperform many baseline methods. This is because our approaches can select important words and news for learning informative news and user representations via a hierarchical attention network, which is not considered in the baseline methods. Third, TANR consistently outperforms TANR-basic. This validates that the news topics are useful for news recommendation, and that our approach can effectively exploit the topic information. Next, we evaluate the performance of our approach in topic classification. The performance in F-score over each topic category is shown in Fig. 5. From Fig. 5, we find the classification of most topic classes is satisfactory, except for the class “kids”. This may be because the training data for this class is too scarce for it to be recognized reliably. These results show that our approach can capture useful topic information by training the news encoder with an auxiliary topic classification task to learn topic-aware news representations. Figure 5: Performance of topic classification (F-score over each topic category: autos, lifestyle, finance, entertainment, foodanddrink, travel, tv, sports, movies, weather, music, health, video, kids). Figure 6: Effectiveness of different attention networks. 3.3 Effectiveness of Hierarchical Attention We conducted experiments to explore the hierarchical attentions in our approach. The results are shown in Fig. 6.
We find the news-level attention network can effectively improve the performance of our approach. This is because different news usually have different informativeness in representing users, and selecting important news can help learn more informative user representations. In addition, the word-level attention network is also useful. This is because different words usually have different importance for representing news, and our approach can select important words to learn informative news representations. Moreover, combining both attention networks can further improve the performance of our approach. These results validate the effectiveness of hierarchical attentions in our approach. 3.4 Influence of Hyperparameter In this section, we explore the influence of the coefficient λ in Eq. (9) on our approach. It controls the relative importance of the topic classification task. The results are shown in Fig. 7. We find if λ is too small, the performance of our approach Figure 7: Influence of the hyperparameter λ. is not optimal, since the useful topic information is not fully exploited. Thus, the performance improves when λ increases from 0. However, when λ becomes too large, the performance of our approach declines. This is because the topic classification task is over-emphasized and the news recommendation task is not fully respected. A moderate value of λ (e.g., 0.2) is the most appropriate. 4 Conclusion In this paper, we propose a neural news recommendation approach with topic-aware news representations. In our approach we propose a new encoder to learn news representations from news titles, and use attention network to select important words. In addition, we propose to train a topic-aware news encoder via jointly training it with an auxiliary topic classification task to extract the topic information in news. In addition, we propose a user encoder to learn representations of users from their browsed news, and use attention network to select important news. Extensive experiments on a real-world dataset validate the effectiveness of our approach. Acknowledgments The authors would like to thank Microsoft News for providing technical support and data in the experiments, and Jiun-Hung Chen (Microsoft News) and Ying Qiao (Microsoft News) for their support and discussions. This work was supported by the National Key Research and Development Program of China under Grant number 2018YFC1604002, the National Natural Science Foundation of China under Grant numbers U1836204, U1705261, U1636113, U1536201, and U1536207, and the Tsinghua University Initiative Scientific Research Program under Grant number 20181080368. 1159 References Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. 2016. Wide & deep learning for recommender systems. In DLRS, pages 7–10. ACM. Abhinandan S Das, Mayur Datar, Ashutosh Garg, and Shyam Rajaram. 2007. Google news personalization: scalable online collaborative filtering. In WWW, pages 271–280. ACM. Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. 2017. Deepfm: a factorization-machine based neural network for ctr prediction. In AAAI, pages 1725–1731. AAAI Press. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In CIKM, pages 2333–2338. Wouter IJntema, Frank Goossen, Flavius Frasincar, and Frederik Hogenboom. 2010. 
Ontology-based news recommendation. In Proceedings of the 2010 EDBT/ICDT Workshops, page 16. ACM. Dhruv Khattar, Vaibhav Kumar, Vasudeva Varma, and Manish Gupta. 2018. Weave& rec: A word embedding based 3-d convolutional network for news recommendation. In CIKM, pages 1855–1858. ACM. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP, pages 1746– 1751. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Vaibhav Kumar, Dhruv Khattar, Shashank Gupta, and Vasudeva Varma. 2017. Word semantics based 3-d convolutional neural networks for news recommendation. In 2017 IEEE International Conference on Data Mining Workshops, pages 761–764. Talia Lavie, Michal Sela, Ilit Oppenheim, Ohad Inbar, and Joachim Meyer. 2010. User attitudes towards news content personalization. International journal of human-computer studies, 68(8):483–495. Jianxun Lian, Fuzheng Zhang, Xing Xie, and Guangzhong Sun. 2018. Towards better representation learning for personalized news recommendation: a multi-channel deep fusion approach. In IJCAI, pages 3805–3811. Shumpei Okura, Yukihiro Tagami, Shingo Ono, and Akira Tajima. 2017. Embedding-based news recommendation for millions of users. In KDD, pages 1933–1942. ACM. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Owen Phelan, Kevin McCarthy, Mike Bennett, and Barry Smyth. 2011. Terms of a feather: Contentbased news recommendation and discovery using twitter. In ECIR, pages 448–459. Springer. Steffen Rendle. 2012. Factorization machines with libfm. TIST, 3(3):57. Hongwei Wang, Fuzheng Zhang, Xing Xie, and Minyi Guo. 2018. Dkn: Deep knowledge-aware network for news recommendation. In WWW, pages 1835– 1844. Chuhan Wu, Fangzhao Wu, Mingxiao An, Jianqiang Huang, Yongfeng Huang, and Xing Xie. 2019a. Neural news recommendation with attentive multiview learning. In IJCAI. Chuhan Wu, Fangzhao Wu, Junxin Liu, and Yongfeng Huang. 2019b. Npa: Neural news recommendation with personalized attention. In KDD. ACM. Guanjie Zheng, Fuzheng Zhang, Zihan Zheng, Yang Xiang, Nicholas Jing Yuan, Xing Xie, and Zhenhui Li. 2018. Drn: A deep reinforcement learning framework for news recommendation. In WWW, pages 167–176.
2019
110
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1160–1166 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1160 Poetry to Prose Conversion in Sanskrit as a Linearisation Task: A case for Low-Resource Languages Amrith Krishna1, Vishnu Dutt Sharma2, Bishal Santra1, Aishik Chakraborty3∗, Pavankumar Satuluri4 and Pawan Goyal1 1Dept. of Computer Science and Engineering, IIT Kharagpur 2American Express India Pvt Ltd 3School of Computer Science, McGill University 4Chinmaya Vishwavidyapeeth {amrith,bishal.santra}@iitkgp.ac.in, [email protected] [email protected] Abstract The word ordering in a Sanskrit verse is often not aligned with its corresponding prose order. Conversion of the verse to its corresponding prose helps in better comprehension of the construction. Owing to the resource constraints, we formulate this task as a word ordering (linearisation) task. In doing so, we completely ignore the word arrangement at the verse side. k¯avya guru, the approach we propose, essentially consists of a pipeline of two pretraining steps followed by a seq2seq model. The first pretraining step learns task specific token embeddings from pretrained embeddings. In the next step, we generate multiple hypotheses for possible word arrangements of the input (Wang et al., 2018). We then use them as inputs to a neural seq2seq model for the final prediction. We empirically show that the hypotheses generated by our pretraining step result in predictions that consistently outperform predictions based on the original order in the verse. Overall, k¯avya guru outperforms current state of the art models in linearisation for the poetry to prose conversion task in Sanskrit. 1 Introduction Prosody plays a key role in the word arrangement in Sanskrit Poetry. The word arrangement in a verse should result in a sequence of syllables which adhere to one of the prescribed meters in Sanskrit Prosody (Scharf et al., 2015). As a result, the configurational information of the words in a verse is not aligned with its verbal cognition (Bhatta, 1990; Dennis, 2005). Obtaining the proper word ordering, called as the prose ordering, from a verse is often considered a task which requires linguistic expertise (Shukla et al., 2016; Kulkarni et al., 2015). ∗Work done while the author was at IIT Kharagpur In this work, we use neural sequence generation models for automatic conversion of poetry to prose. Lack of sufficient poetry-prose parallel data is an impediment in framing the problem as a seq2seq task (Gu et al., 2018).1 Hence, we formulate our task as that of a word linearisation task (He et al., 2009). In linearisation, we arrange a bag of words into a grammatical and fluent sentence (Liu et al., 2015). This eliminates the need for parallel data, as the poetry order is not anymore relevant at the input. A neural-LM based model from Schmaltz et al. (2016) and a seq2seq model form Wiseman and Rush (2016) are the current state of the art (SOTA) models in the linearisation task. We first show that a seq2seq model with gated CNNs (Gehring et al., 2017), using a sequence level loss (Edunov et al., 2018) can outperform both the SOTA models for the Sanskrit poetry linearisation task. But using a seq2seq model brings non-determinism to the model as the final prediction of the system is dependent on the order at which the words are input to the encoder (Vinyals et al., 2016). 
We resolve this, by using a pretraining approach (Wang et al., 2018) to obtain an initial ordering of the words, to be fed to the final model. This approach consistently performs better than using the original poetry order as input. Further, we find that generating multiple hypotheses2 using this component (Wang et al., 2018), to be fed to the final seq2seq component, results in improving the results by about 8 BLEU points. Additionally, we use a pretraining approach to learn task specific word embeddings by combining multiple word embeddings (Kiela et al., 2018). We call our final configuration as k¯avya guru. ‘k¯avya guru’ is a compound word in Sanskrit, which roughly translates to ‘an expert in prosody’. 1Refer to Appendix A for details on our preliminary experiments in this direction. 2Empirically shown to be 10 1161 Figure 1: Configuration for k¯avya guru, demonstrated for a 3 word sentence with a prose order ‘r¯amah. vidy¯alayam gacchati’. English translation: “R¯ama goes to School”. We show generation of only one hypothesis from SAWO. 2 Poetry to Prose as Linearisation Given a verse sequence x1, x2......xn, our task is to rearrange the words in the verse to obtain its prose order. As shown in Figure 2, k¯avya guru takes the Bag of Words (BoW) S as the input to the system. We use two pretraining steps prior to the seq2seq component in our approach. The first step, ‘DME’, combines multiple pretrained word embeddings, say {w11, w12, w13} for a token x1 ∈S, into a single meta-embedding, wDME 1 . The second component, ‘SAWO’, is a linearisation model in itself, which we use to generate multiple hypotheses, i.e., different permutations of the tokens, to be used as input to the final ‘seq2seq’ component. Pretraining Step 1 – Dynamic Meta Embeddings (DME): Given a token xi ∈S, we obtain r different pre-trained word embeddings, represented as {wi1, wi2....wir}. Following Kiela et al. (2018), we learn a single task specific embedding, wDME i using weighted sum of all the r embeddings. The scalar weights for combining the embeddings are learnt using self-attention, with a training objective to minimise the negative log likelihood of the sentences, given in the prose order. Pretraining Step 2 – Self-Attention Based Word-Ordering (SAWO): SAWO allows us to generate multiple permutations of words as hypotheses, which can be used as input to a seq2seq model. Here, we use a word ordering model itself as a pretraining step, proposed in Wang et al. (2018). From step 1, we obtain the DME embeddings, {wDME 1 , wDME 2 , ....wDME n }, one each for each token in S. For each token in S, we also learn additional embeddings, {sa1, sa2, ....san}, using the self-attention mechanism. These additional vectors are obtained using the weighted sum of all the DME embeddings in the input BoW S, where the weights are learned using the selfattention mechanism (Wang et al., 2018; Vaswani et al., 2017). As shown in Figure 2, the DME vector wDME i and the vector sai are then concatenated to form a representation for the token Xi. The concatenated vectors so obtained for all the tokens in S, form the input to the decoder. We use an LSTM based decoder, initialised with the average of DME embeddings of all the tokens ({wDME 1 , wDME 2 , ....wDME n }) at the input. A special token is used as the input in the first time-step, and based on the predictions from the decoder, the concatenated vectors are input in the subsequent time-steps. 
The decoder is constrained to predict from the list of words in BoW, which are not yet predicted at a given instance. We use a beamsearch based decoding strategy (Schmaltz et al., 2016) to obtain top-k hypotheses for the system. For both the pretraining steps, the training objective is to minimise the negative log likelihood of the ground truth (prose order sentences), and both the components are trained jointly. The multiple hypotheses so generated are used as independent inputs to the seq2seq model, with the prose order as their corresponding ground truth for training. In the figure 2, we show only one hypothesis from SAWO. This helps us to obtain a k-fold in1162 crease in the amount of available training data. The seq2seq model: We use the seq2seq model comprising of gated CNNs (Gehring et al., 2017) for the task. Our training objective is a weighted combination of the expected risk minimisation (RISK) and the token level negative log likelihood with label smoothing (TokLS) (Edunov et al., 2018). Here, we use a uniform prior distribution over the vocabulary for label smoothing. RISK minimises the expected value of a given cost function, BLEU in our case, over the space of candidate sequences. (1) LRisk = X u∈U cost(ˆy, u) p(u|x′) P u′∈U p(u′|x′) Here U is the candidate set, with |U|= 16 and the sequences in U are obtained using Beam Search. The size for the beam search was determined empirically.3 ˆy is the reference target sequence, i.e., the prose. x′ is the input sequence to the model, which is obtained from SAWO. In LRisk, cost(ˆy, u) = 1 −BLEU(ˆy, u), where 0 ≤BLEU(ˆy, u) ≤1. Similar to Wiseman and Rush (2016), we constrain the prediction of tokens to those available at the input during testing. Majority Vote Policy: For an input verse, SAWO generates multiple hypotheses and seq2seq then predicts a sequence corresponding to each of these, of the same size as the input. To get a single final output, we use a ‘Majority Vote’ policy. For each position, starting from left, we find the token which was predicted the most number of times at that position among all the seq2seq outputs, and choose it as the token in the final output. 3 Experiments Dataset: We obtain 17,017 parallel poetry-prose data from the epic “R¯am¯ayan. a’’.4 Given that about 90 % of the vocabulary appears less than 5 times in the corpus, we use BPE to learn a new vocabulary (Sennrich et al., 2016). We add about 95,000 prose-order sentences from Wikipedia into our training data, as the poetry order input is irrelevant for linearisation.5 3We experimented with beam sizes from 1 to 32, in powers of 2. Since the increase in beam size from 16 to 32 did not result in significant improvements in system performance, we set the beam size as 16. 4Filtered from 18,250 verses. The remaining were ignored due to corrupted word constructions. 5For heuristics used for identifying prose order sentences, refer Appendix A Data Preparation: With a vocabulary of 12,000, we learn embeddings for the BPE entries using Word2vec (Mikolov et al., 2013), FastText (Bojanowski et al., 2017), and character embeddings from Hellwig and Nehrdich (2018). The embeddings were trained on 0.8 million sentences (6.5 million tokens) collected from multiple corpora including DCS (Hellwig, 2011), Wikipedia and Vedabase6. Finally, we combine the word embeddings using DME (Kiela et al., 2018). From the set of 17,017 parallel poetry-prose corpus, we use 13,000 sentence pairs for training, 1,000 for validation and the remaining 3,017 sentence pairs for testing. 
The sentences in test data are not used in any part of training or for learning the embeddings. Evaluation Metrics: Linearisation tasks are generally reported using BLEU (Papineni et al., 2002) score (Hasler et al., 2017; Belz et al., 2011). Additionally, we report Kendall’s Tau (τ) and perfect match scores for the models. Perfect match is the fraction of sentences where the prediction matches exactly with the ground truth. Kendall’s Tau (τ) is calculated based on the number of inversions needed to transform a predicted sequence to the ordering in the reference sequence. τ is used as a metric in sentence ordering tasks (Lapata, 2006), and is defined as 1 m Pm i=1 1 −2 × inversions count/ n 2  (Logeswaran et al., 2018; Lapata, 2003). In all these three metrics, a higher score always corresponds to a better performance of the system. 3.1 Baselines LSTM Based Linearisation Model (LinLSTM): LinLSTM is an LSTM based neural language model (LM) proposed by Schmaltz et al. (2016). Sequences in sentence/prose order are fed to the system for learning the LM. Beam search, constrained to predict only from the bag of words given as input, is used for decoding. The authors obtained SOTA results in their experiments on the Penn Treebank, even outperforming different syntax based linearisation models (Zhang and Clark, 2015; Zhang, 2013). The best result for the model was obtained using a beam size of 512, and we use the same setting for our experiments. 6https://www.vedabase.com/en/sb 1163 System Augmentation τ BLEU PM(%) LinLSTM Ramayan.a dataset 61.47 35.51 8.22 + Wikipedia Prose 58.86 31.39 7.14 BSO Ramayan.a dataset 58.62 29.16 7.61 + Wikipedia Prose 65.38 41.22 12.97 + DME 68.45 44.29 19.69 + SAWO 72.89 52.37 24.56 k¯avya guru Ramayan.a dataset 59.27 31.55 8.62 + Wikipedia Prose 66.82 42.91 13.52 + DME 70.8 48.33 20.21 + SAWO 74.32 54.49 25.72 + Self-Attention 75.58 55.26 26.08 (a) Results for all the three competing models. The ‘+’ sign indicates that the augmentation is added to the configuration in the row above it. k τ BLEU PM 1 71.14 48.26 20.15 5 74.15 53.74 25.02 10 75.58 55.26 26.08 (b) Results for k¯avya guru when trained (and at test-time) using different values of k at the SAWO pretraining step. Encoding τ BLEU PM IAST 73.64 53.46 23.73 SLP1 73.79 53.91 24.16 Syllable 75.58 55.26 26.08 (c) Results for k¯avya guru, when using different sequence encoding schemes. Table 1: Experimental results for different configurations and different settings, performed on the test data. Table b and Table c use the configuration in the last row of Table a, which is the best performing configuration of k¯avya guru. Seq2Seq with Beam Search Optimisation (BSO): The seq2seq model uses a max-margin approach with a search based loss, designed to penalise the errors made during beam search (Wiseman and Rush, 2016). Here scores for different possible sequences are predicted and then they are ranked using beam search. The loss penalises the function when the gold sequence falls off the beam during training. For our experiments, we use a beam size of 15 for testing and 14 for training, the setting with best reported scores in Wiseman and Rush (2016). 3.2 Results Table 1a provides the results for all the three systems under different settings. k¯avya guru reports the best results with a BLEU score of 55.26, outperforming the baselines. We apply both the pretraining components and the ‘Majority Vote’ policy (§2) to both the seq2seq models, i.e. ‘BSO’ and the proposed model ‘k¯avya guru’. 
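Before turning to the results, the evaluation metrics described above can be made concrete with a short sketch. The Kendall's Tau implementation below assumes, for simplicity, that tokens within a sentence are unique so that every predicted token maps to a single reference position; it is illustrative rather than the authors' exact scoring script.

```python
from itertools import combinations

def kendall_tau(predicted, reference):
    # tau = 1 - 2 * (#inversions) / C(n, 2); higher is better, 1.0 for a perfect order.
    position = {token: idx for idx, token in enumerate(reference)}
    ranks = [position[token] for token in predicted]
    n = len(ranks)
    inversions = sum(1 for i, j in combinations(range(n), 2) if ranks[i] > ranks[j])
    return 1.0 - 2.0 * inversions / (n * (n - 1) / 2)

def perfect_match(predictions, references):
    # Fraction of sentences whose predicted order matches the reference exactly.
    return sum(p == r for p, r in zip(predictions, references)) / len(references)
```

The corpus-level Kendall's Tau is then the average of the per-sentence scores, following the formula given above.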
From Table 1a, it is evident that infusing proseonly training data from Wikipedia, and applying both the pretraining steps leads to significant7 and consistent improvements for both the seq2seq models. LinLSTM shows a decrease in its performance when the dataset is augmented with sentences from Wikipedia. We obtain the best results for k¯avya guru when self-attention 7For all the reported results, we use approximate randomisation approach for significance tests. All the reported values have a p-value < 0.02 was added to the seq2seq component of the model (Edunov et al., 2018; Paulus et al., 2018) (final row in Table 1a). Table 1c shows that the textencoding/transliteration scheme in which a sequence is represented affects the results. k¯avya guru performs the best when it uses syllable level encoding of input, as compared to character level transliteration schemes such as IAST8 or SLP19. Effect of increase in training set size due to SAWO: Using SAWO, we can generate multiple word order hypotheses as the input to the seq2seq model. Results from Table 1b show that generating multiple hypotheses leads to improvements in the system performance.7 It might be puzzling that k¯avya guru contains two components, i.e. SAWO and seq2seq, where both of them perform essentially the same task of word ordering. This might create an impression of redundancy in k¯avya guru. But, a configuration that uses only the DME and SAWO (without the seq2seq), results in a BLEU score of 33.8 as against 48.26 for k¯avya guru (Table 1b, k = 1). Now, this brings the validity of SAWO component into question. To check this, instead of generating hypotheses using SAWO, we used 100 random permutations10 for a given sentence as input to the seq2seq component. The 8https://en.wikipedia.org/wiki/ International_Alphabet_of_Sanskrit_ Transliteration 9https://en.wikipedia.org/wiki/SLP1 10Empirically decided from 1 to 100 random permutations with a step size of 10 1164 first 3 rows of BSO and k¯avya guru in Table 1a show the results for non-SAWO configurations. These configurations do not outperform SAWO based configurations, in spite of using as many as 10 times the candidates than those used in SAWO based configuration. For SAWO (non-SAWO), we find that the system performances tend to saturate with number of hypotheses greater than 10 (100). Effect of using word order in the verse at inference: During inference, the test-set sentences are passed as input in the verse order to each of the k¯avya guru configurations in Table 1a. k¯avya guru+DME configuration achieves the best result for this. But here also, the system performance drops to τ = 68.92 and BLEU = 45.63, from 70.8 and 48.33, respectively. To discount the effect of majority vote policy used in SAWO, we consider predictions based on individual SAWO hypotheses. However, even the lowest τ score (70.61), obtained while using the 10th ranked hypothesis from SAWO, outperforms the predictions based on the verse order.7 4 Conclusion In this work, we attempt to address the poetry to prose conversion problem by formalising it as an LM based word linearisation task. We find that k¯avya guru outperforms the state of the art models in word linearisation for the task. Though tremendous progress has been made in digitising texts in Sanskrit, they still remain inaccessible largely due to lack of specific tools that can address linguistic peculiarities exhibited by the language (Krishna et al., 2017). 
From a pedagogical perspective, it will be beneficial for learners of the language to look into the prose of the verses for an easier comprehension of the concepts discussed in the verse. Acknowledgements We are grateful to Dr Amba Kulkarni and Dr Peter M Scharf for their valuable guidance and support throughout the work. We extend our gratitude to the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper. References Anja Belz, Mike White, Dominic Espinosa, Eric Kow, Deirdre Hogan, and Amanda Stent. 2011. The first surface realisation shared task: Overview and evaluation results. In Proceedings of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation, pages 217–226, Nancy, France. Association for Computational Linguistics. Vinayak P. Bhatta. 1990. Theory of verbal cognition (bdabodha). Bulletin of the Deccan College Research Institute, 49:59–74. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Simon Dennis. 2005. A memory-based theory of verbal cognition. Cognitive Science, 29(2):145–193. Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc’Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to sequence learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 355–364. Association for Computational Linguistics. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML’17, pages 1243– 1252. JMLR.org. Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O.K. Li. 2018. Universal neural machine translation for extremely low resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 344–354, New Orleans, Louisiana. Association for Computational Linguistics. Eva Hasler, Felix Stahlberg, Marcus Tomalin, Adria de Gispert, and Bill Byrne. 2017. A comparison of neural models for word ordering. In Proceedings of the 10th International Conference on Natural Language Generation, pages 208–212. Wei He, Haifeng Wang, Yuqing Guo, and Ting Liu. 2009. Dependency based chinese sentence realization. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 809–816, Suntec, Singapore. Association for Computational Linguistics. Oliver Hellwig. 2011. DCS - The Digital Corpus of Sanskrit. Oliver Hellwig. 2016. Detecting sentence boundaries in sanskrit texts. In Proceedings of COLING 2016, the 26th International Conference on Computational 1165 Linguistics: Technical Papers, pages 288–297, Osaka, Japan. The COLING 2016 Organizing Committee. Oliver Hellwig and Sebastian Nehrdich. 2018. Sanskrit word segmentation using character-level recurrent and convolutional neural networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2754–2763. Association for Computational Linguistics. Douwe Kiela, Changhan Wang, and Kyunghyun Cho. 2018. 
Dynamic meta-embeddings for improved sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1466–1477. Association for Computational Linguistics. Amrith Krishna, Pavan Kumar Satuluri, and Pawan Goyal. 2017. A dataset for sanskrit word segmentation. In Proceedings of the Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pages 105–114, Vancouver, Canada. Association for Computational Linguistics. Amba Kulkarni, Preethi Shukla, Pavankumar Satuluri, and Devanand Shukl. 2015. How Free is free Word Order in Sanskrit. The Sanskrit Library, USA. Mirella Lapata. 2003. Probabilistic text structuring: Experiments with sentence ordering. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 545–552, Sapporo, Japan. Association for Computational Linguistics. Mirella Lapata. 2006. Automatic evaluation of information ordering: Kendall’s tau. Computational Linguistics, 32(4):471–484. Yijia Liu, Yue Zhang, Wanxiang Che, and Bing Qin. 2015. Transition-based syntactic linearization. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 113–122, Denver, Colorado. Association for Computational Linguistics. Lajanugen Logeswaran, Honglak Lee, and Dragomir Radev. 2018. Sentence ordering and coherence modeling using recurrent neural networks. In AAAI Conference on Artificial Intelligence, pages 5285– 5292. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In International Conference on Learning Representations. Peter Scharf, Anuja Ajotikar, Sampada Savardekar, and Pawan Goyal. 2015. Distinctive features of poetic syntax preliminary results. Sanskrit syntax, pages 305–324. Allen Schmaltz, Alexander M. Rush, and Stuart Shieber. 2016. Word ordering without syntax. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2319–2324, Austin, Texas. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Preeti Shukla, Amba Kulkarni, and Devanand Shukla. 2016. Revival of ancient sanskrit teaching methods using computational platforms. In Bridging the gap between Sanskrit Computational Linguistics tools and management of Sanskrit Digital Libraries Workshop. ICON 2016, IIT BHU. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762. Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. 2016. 
Order matters: Sequence to sequence for sets. In International Conference on Learning Representations (ICLR). Wenhui Wang, Baobao Chang, and Mairgup Mansur. 2018. Improved dependency parsing using implicit word connections learned from unlabeled data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2857–2863. Association for Computational Linguistics. Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search optimization. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1296–1306, Austin, Texas. Association for Computational Linguistics. Yue Zhang. 2013. Partial-tree linearization: Generalized word ordering for text synthesis. In IJCAI, pages 2232–2238. Yue Zhang and Stephen Clark. 2015. Discriminative syntax-based word ordering for text generation. Computational Linguistics, 41(3):503–538. 1166 A Appendix Preliminary results using standard seq2seq models: First the problem was posed as a seq2seq problem, with poetry order as input and prose order as output. With a parallel training data of about 17,000 sentences, we obtained a BLEU score of less than 7 for various seq2seq models including Vaswani et al. (2017);Gehring et al. (2017); Vinyals et al. (2016). We then formulate the problem as a linearisation task. Infusion of sentences of Prose order: We obtain sentences which are available exclusively in prose order and use them to learn our models. We use sentences from Wikipedia for augmenting the R¯am¯ayan.a corpus for training. We obtain about 95,000 sentences from Wikipedia with an average of 7.63 words per sentence. We filter poetry verses from Wikipedia by matching them with the sentences in an existing corpus (DCS11), which is predominantly a poetry corpus. We also filter the sentences (and adjacent 3 lines in either of the directions) which end with a double dan.da, an end marker specifically used for verses (Hellwig, 2016). 11http://kjc-sv013.kjc.uni-heidelberg. de/dcs/index.php?contents=texte
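To make the verse-filtering heuristic above concrete, the following is a minimal sketch (not the authors' code) of excluding lines that end with a double daṇḍa, together with the three adjacent lines in either direction; the specific Unicode code points and the toy input are assumptions made purely for illustration.

```python
# Minimal sketch (illustrative only): drop lines that end with a double danda,
# the verse end marker, plus the 3 neighbouring lines in either direction,
# keeping the remainder as candidate prose sentences.

DOUBLE_DANDA = "\u0965"        # DEVANAGARI DOUBLE DANDA
DANDA_PAIR = "\u0964\u0964"    # two single dandas, an alternative rendering
WINDOW = 3                     # neighbouring lines removed on each side

def ends_with_verse_marker(line: str) -> bool:
    stripped = line.strip()
    return stripped.endswith(DOUBLE_DANDA) or stripped.endswith(DANDA_PAIR)

def keep_prose_lines(lines):
    drop = set()
    for i, line in enumerate(lines):
        if ends_with_verse_marker(line):
            drop.update(range(max(0, i - WINDOW), min(len(lines), i + WINDOW + 1)))
    return [line for i, line in enumerate(lines) if i not in drop]

if __name__ == "__main__":
    sample = [f"prose sentence {i} ." for i in range(8)]
    sample[2] = "a verse line \u0965"
    print(keep_prose_lines(sample))   # keeps only sentences 6 and 7
```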
2019
111
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1167–1172 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1167 Learning Emphasis Selection for Written Text in Visual Media from Crowd-Sourced Label Distributions Amirreza Shirani†, Franck Dernoncourt‡, Paul Asente‡, Nedim Lipka‡, Seokhwan Kim§, Jose Echevarria‡ and Thamar Solorio† †University of Houston ‡Adobe Research §Amazon Alexa AI †{ashirani,tsolorio}@uh.edu ‡{dernonco,asente,lipka,echevarr}@adobe.com §[email protected] Abstract In visual communication, text emphasis is used to increase the comprehension of written text and to convey the author’s intent. We study the problem of emphasis selection, i.e. choosing candidates for emphasis in short written text, to enable automated design assistance in authoring. Without knowing the author’s intent and only considering the input text, multiple emphasis selections are valid. We propose a model that employs end-to-end label distribution learning (LDL) on crowd-sourced data and predicts a selection distribution, capturing the inter-subjectivity (common-sense) in the audience as well as the ambiguity of the input. We compare the model with several baselines in which the problem is transformed to single-label learning by mapping label distributions to absolute labels via majority voting. 1 Introduction Visual communication relies heavily on images and short texts. Whether it is flyers, posters, ads, social media posts or motivational messages, it is usually highly designed to grab a viewer’s attention and convey a message in the most efficient way. For text, word emphasis is used to capture the intent better, removing the ambiguity that may exist in some plain texts. Word emphasis can clarify or even change the meaning of a sentence by drawing attention to some specific information. It can be done with colors, backgrounds, or fonts, or with styles like italic and boldface. Some graphic design applications such as Adobe Spark1 perform automatic text layout using templates that include images and text with different fonts and colors. However, their text layout algorithms are mainly driven by visual attributes like word length, rather than the semantics of the 1https://spark.adobe.com (a) (b) Figure 1: Two different text layouts emphasizing different parts of the sentence. text or the user’s intent, which can lead to unintended emphasis and the wrong message. Figure 1a shows an example that is aesthetically appealing but fails to effectively communicate its intent. Understanding the text would allow the system to propose a different layout that emphasizes words that contribute more to the communication of the intent, as shown in Figure 1b. We investigate models that aim to understand the most common interpretation of a short piece of text, so the right emphasis can be achieved automatically or interactively. The ultimate goal is to enable design assistance for the user during authoring. The main focus is on short text instances for social media, with a variety of examples from inspirational quotes to advertising slogans. We model emphasis using plain text with no additional context from the user or the rest of the design. This task differs from related ones in that word emphasis patterns are person- and domainspecific, making different selections valid depending on the audience and the intent. 
For example, in Figure 1b, some users might prefer to just emphasize “knowledge” or “good.” To tackle this, we model emphasis by learning label distributions (LDL) with a deep sequence labeling network and 1168 the KL-Divergence loss function. LDL allows us to effectively capture the label ambiguity and inter-subjectivity within the annotators. Unlike single- or multi-label learning, LDL allows direct modeling of different importance of each label to the instance (Geng, 2016). The proposed model yields good performance despite the small amount of training data and can be used as a baseline for this task for future evaluations. Our main contributions are: (1) Introducing a new NLP task: emphasis selection for short text instances as used in social media, learned from a new dataset. (2) Proposing a novel end-toend sequence labeling architecture utilizing LDL to model the emphasis words in a given text. (3) Defining evaluation metrics and providing comparisons with several baselines to assess the model performance. 2 Related Work A large amount of work in NLP addresses finding keywords or key-phrases in long texts from scientific articles, news, etc. (Augenstein et al., 2017; Zhang et al., 2016). Keyword detection mainly focuses on finding important nouns or noun phrases. In contrast, social media text is much shorter, and users tend to emphasize a subset of words with different roles to convey specific intent. Emphasis words are not necessarily the words with the highest or lowest frequency in the text. Often a high sentiment adjective can be emphasized, such as Hot in Hot Summer. Generally, word emphasis may express emotions, show contrast, capture a reader’s interest or clarify a message. In a different context, modeling word emphasis has been addressed in expressive prosody generation. Most studies detect emphasis words based on acoustic and prosodic features that exist in spoken data (Mishra et al., 2012; Chen and Pan, 2017). More recently, few works model emphasis from text to improve expressive prosody generation in modern Text-To-Speech (TTS) systems (Nakajima et al., 2014; Mass et al., 2018). For example, (Mass et al., 2018) trained a deep neural network model on audience-addressed speeches to predict word emphasis. The dataset consists of relatively long paragraphs which are labeled by four annotators based on words that clearly stand out in a recorded speech. Many approaches have been proposed to deal with annotations coming from multiple annotators by essentially transforming the problem into single-label learning. Some rely on majority voting e.g. (Laws et al., 2011). More recent works (Yang et al., 2018; Rodrigues et al., 2014; Rodrigues and Pereira, 2018) use different strategies to learn individual annotator expertise or reliability, helping to infer the true labels from noisy and sparse annotations. All these approaches share one key aspect: only one label sequence is correct and should be considered as ground truth. This is contrary to the ambiguous nature of our task, where different interpretations are possible. Our solution is to utilize label distribution learning (Subsection 3.2). LDL methods have been used before to solve various visual recognition problems such as facial age prediction (Rondeau and Alvarez, 2018; Gao et al., 2017). We are the first to introduce LDL for sequence labeling. 
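Before the task is formalised, a minimal sketch may help make the LDL view concrete: per-token label distributions are obtained by normalising annotator counts, and a model's per-token distributions are scored against them with a KL-divergence loss. The per-token model outputs and the use of the standard KL(P||Q) form are illustrative assumptions here, not the authors' implementation; the annotation matrix is transposed from the example shown later in Table 1.

```python
import numpy as np

# Nine annotators' IO labels (1 = "I"/emphasis, 0 = "O") for the six tokens of
# "Enjoy the Last Bit of Summer", transposed from the example in Table 1 below.
annotations = np.array([
    [1, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0], [1, 0, 1, 1, 0, 1], [0, 0, 1, 1, 0, 0],
    [1, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 1], [0, 0, 0, 1, 0, 0],
])

# Ground-truth label distribution per token: [P(I), P(O)] = counts / #annotators.
p_i = annotations.mean(axis=0)
truth = np.stack([p_i, 1.0 - p_i], axis=1)            # shape (6 tokens, 2 labels)

# Hypothetical per-token model output over {I, O} for the same instance.
pred = np.array([[0.70, 0.30], [0.05, 0.95], [0.20, 0.80],
                 [0.40, 0.60], [0.05, 0.95], [0.60, 0.40]])

def kl_div_loss(p, q, eps=1e-9):
    """Standard KL(P || Q), summed over labels and averaged over tokens."""
    p, q = p + eps, q + eps
    return float(np.mean(np.sum(p * np.log(p / q), axis=1)))

print(np.round(truth, 3))
print("KL-DIV loss:", round(kl_div_loss(truth, pred), 4))
```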
3 Emphasis Selection 3.1 Task Definition Given a sequence of words or tokens C = {x1, ..., xn}, we want to determine the subset S of words in C that are good candidates to emphasize, where 1 ≤|S| ≤n. 3.2 Label Distribution Learning We pose this task as a sequence labeling problem where the model assigns each token x from C a real number dx y to each possible label, representing the degree to which y describes x. Where dx y ∈[0, 1] and P y dx y = 1. We use IO scheme y ∈{I, O}, where “I” and “O” indicate emphasis and non-emphasis respectively. The final set of Si can be generated by selecting tokens with different strategies (Subsection 5.3). 3.3 Dataset We obtained 1,206 short text instances from Adobe Spark, which will be publicly available along with their annotations2. This collection contains a variety of subjects featured in flyers, posters, advertisement or motivational memes on social media. The dataset contains 7,550 tokens and the average number of tokens per instance is 6.16, ranging from 2 to 25 tokens. On average, each instance contains 2.38 emphases and the ratio of non-emphasis to emphasis tokens is 1.61. 2http://ritual.uh.edu/resources/ emphasis-2019/ 1169 Words A1 A2 A3 A4 A5 A6 A7 A8 A9 Freq. [I,O] Enjoy I I I I I O I O O [6,3] the O O O O O O O O O [0,9] Last O O O O I I O O O [2,7] Bit O O O O I I O O I [3,6] of O O O O O O O O O [0,9] Summer I I I O I O I I O [6,3] Table 1: A short text example from our collected dataset along with its nine annotations. We used Amazon Mechanical Turk and asked nine annotators to label each piece of text. To ensure high quality annotation, we included carefully-designed quality questions in 10 percent of the hits. We obtained a Fleiss’ kappa agreement (Fleiss, 1971) of 63.59, which compared to similar tasks proves the subjectivity and multi-answer nature of our problem. We noticed higher annotation agreement in shorter length instances (2 to 5 words). Having many extremely short pieces of text in the dataset (∼60%) increased the annotation agreement. We split up the data randomly into training (60%), development (10%) and test (30%) sets for further analysis. Table 1 shows an example of text annotated with the IO annotations. Ultimately, we compute the label distribution for each instance, which corresponds to the count per label normalized by the total number of annotations. 4 Model We use an LSTM-based sequence labeling model to learn emphasis patterns. Figure 2 shows the overall architecture of the proposed model (DLBiLSTM). Given a sequence of words, the model w1 w2 w3 w4 BiLSTM + Attention Fully connected layers [I,O] [I,O] [I,O] [I,O] Input Embedding Layer Sequence Layer Output Inference Layer Figure 2: DL-BiLSTM Architecture is to label each word with its appropriate label distribution. Words are represented with word embeddings for each input word sequence. We use two stacked bidirectional LSTM layers as an encoder to model word sequence information in both forward and backward directions. Having two BiLSTM layers helps to build a deeper feature extractor; having more than two does not help the performance as the model becomes too complicated. We investigate the impact of attention mechanisms to the model (Vinyals et al., 2015; Zhang et al., 2017), where attention weights ai represent the relative contribution of a specific word to the text representation. 
We compute ai at each output time i as follows: ai = softmax(vT tanh(Whhi + bh)) (1) zi = ai · hi (2) where hi is encoder hidden state and v and Wh are learnable parameters of the network. The output zi is the element-wise dot product of ai and hi. Subsequently, the inference layer assigns labels (probabilities) to each word using the hidden states of word sequence representations. This layer internally consists of two fully connected layers with size of 50. We use layer normalization (Ba et al., 2016) for improved results. 3 KL-Divergence Loss During the training phase, the Kullback-Leibler Divergence (KLDIV) (Kullback and Leibler, 1951) is used as the loss function. KL-DIV is a measure of how one probability distribution P is different from a second reference probability distribution Q: KL-DIV(P||Q) = X x∈X P(x) log Q(x) P(x) 5 Experimental Settings and Results 5.1 Training Details We use two different word representations: pretrained 100-dim GloVe embedding (Pennington et al., 2014), and 2048-dim ELMo embedding (Peters et al., 2018). We use BiLSTM layers with hidden size of 512 and 2048 when using GloVe and ELMo embeddings respectively. We use the Adam optimizer (Kingma and Ba, 2014) with the learning rate set to 0.001. In order to better train and to force the network finding different activation paths, we use two dropout layers with a rate of 0.5 in the sequence and inference layers. Finetuning is performed for 160 epochs, and the reported test result corresponds to the best accuracy obtained on the validation set. 3The implementation is available online: https:// github.com/RiTUAL-UH/emphasis-2019 1170 Model/Evals Matchm TopK MAX m=1 m=2 m=3 m=4 k=1 k=2 k=3 k=4 F F F F ROC AUC Label Distribution Learning Models M1 DL-BiLSTM+GloVe 54.8 69.4 77.2 81.6 47.5 68.2 78.1 83.6 0.874 M2 DL-BiLSTM+GloVe+Att 54.5 69.7 77.7 80.8 47.2 68.5 78.4 83.2 0.880 M3 DL-BiLSTM+ELMo 57.4 72.5 79.2 83.3 49.7 70.7 79.4 84.7 0.887 M4 DL-BiLSTM+ELMo+Att 56.2 72.8 77.9 83.8 48.7 71.0 78.5 85.0 0.883 Single Label Learning Models M5 SL-BiLSTM+GloVe 52.6 66.4 75.4 79.3 45.5 65.9 76.9 82.3 0.860 M6 SL-BiLSTM+GloVe+Att 52.3 66.1 77.2 78.5 45.3 65.6 78.1 81.7 0.862 M7 SL-BiLSTM+ELMO 53.7 68.7 76.9 80.5 46.5 67.7 77.9 83.0 0.865 M8 SL-BiLSTM+ELMo+Att 52.0 68.5 77.4 82.3 45.0 67.6 78.2 84.1 0.866 M9 CRF 44.0 65.3 73.0 79.2 38.1 65.0 75.3 82.2 0.818 Table 2: Experimental results of Label Distribution Learning and Single Label Learning models in three evaluation settings, Matchm, TopK, and MAX. F represents F1-score. (a) Model’s Output (b) Ground Truth Figure 3: Heatmap of emphases; highlighting words with model’s output and ground truth probabilities. 5.2 Baselines We compare our model against alternative setups in which the label distribution is mapped to binary labels using majority voting. We include the following single-label models: SL-BiLSTM This model has a similar architecture compared to the DL-BiLSTM model but the input is a sequence of mapped labels and the negative log likelihood is used as the loss function in the training phase. CRF This model is a Conditional Random Fields model (Lafferty et al., 2001) with handcrafted features including word identity, word suffix, word shape and word part-of-speech (POS) tag for the current and nearby words. The CRFsuite program (Okazaki, 2007) is used for this model. 5.3 Evaluation Settings To assess the performance of the model, we propose three different evaluation settings: Matchm For each instance x in the test set Dtest, we select a set S(x) m of m ∈{1 . . . 
4} words with the top m probabilities according to the ground truth. Analogously, we select a prediction set ˆS(x) m for each m, based on the predicted probabilities. We define the metric Matchm as follows: Matchm := P x∈Dtest |S(x) m ∩ˆS(x) m |/(min(m, |x|)) |Dtest| TopK Similarly to Matchm, for each instance x, we select the top k = {1, 2, ..., 4} words with the highest probabilities from both ground truth and prediction distributions. Then Precision, Recall and F1-score per each k can be computed accordingly. MAX We map the ground truth and prediction distributions to absolute labels by selecting the class with the highest probability. Then we compute ROC AUC. (e.g. a token with label probability of [I = 0.75, O = 0.25] is mapped to “I”). 5.4 Results We run all models over 5 runs with different random seeds and report the scores of the best runs based on the dev set. Table 2 compares different models in terms of three evaluation settings. M1M4 are four variants of the DL-BiLSTM model. Considering all evaluation settings, LDL models (M1-M4) either outperform SL-BiLSTM models 1171 (M5-M8) or perform equally. Using ELMo instead of GloVe yields better results (M3 and M4). M3 and M4 with higher performance in all three metrics outperform the other models. Comparing the best results of both approaches, M3 and M4 with M7 and M8, we observe that both LDL results are statistically significant under paired ttest with 95% confidence interval. The improved performance of label distribution over single-label learning suggests that in LDL, the model exploits ordinal relationships among the classes during optimization, which results in better generalization. Our model is more successful in predicting words with higher human annotation agreement. As we increase the confidence threshold and only consider words with higher ground-truth agreement, our model is able to achieve better results. Figure 3 shows examples from the test set, with a heatmap showing the model’s predicted score and ground truth probabilities. 6 SemEval-2020 Benchmarking We are organizing a SemEval shared task on emphasis selection called “Task 10: Emphasis Selection for Written Text in Visual Media”. In order to set out a comparable baseline for this shared task, in this section, we report results of our models according to the SemEval setting defined for the task. After the submission of this paper, we continued to improve the quality of the annotated data by cleaning the data and fixing the annotations of some noisy instances. The SemEval version of Spark dataset contains 1,200 instances with a different split: 70% training, 10% development and 20% test sets. We choose Matchm as the evaluation metric for this shared task as it provides a comprehensive evaluation compared to MAX, as one can choose the value of m. Furthermore, compared to TopK, the Matchm metric can better handle cases where multiple tokens have the same label distribution according to the annotators in the ground truth. Table 3 shows the results of all nine models under the SemEval setting, using the Matchm evaluation metric. Similar to the results we showed in Table 2, M3 and M4 both perform competitively and outperform the other models. 7 Conclusion We introduced a new task, emphasis selection in short text instances. 
Its goal is to develop models that suggest which part of the text to emphaModel/Eval Matchm m=1 m=2 m=3 m=4 Label Distribution Learning Models M1 DL-BiLSTM+GloVe 54.6 69.2 76.5 81.9 M2 DL-BiLSTM+GloVe+Att 57.5 69.7 76.7 80.7 M3 DL-BiLSTM+ELMo 0.6 71.7 78.7 84.1 M4 DL-BiLSTM+ELMo+Att 59.6 72.7 77.7 84.6 Single Label Learning Models M5 SL-BiLSTM+GloVe 51.7 66.7 75.0 81.1 M6 SL-BiLSTM+GloVe+Att 52.9 66.5 73.6 0.8 M7 SL-BiLSTM+ELMo 54.2 69.0 77.9 83.0 M8 SL-BiLSTM+ELMo+Att 54.2 70.7 78.5 82.8 M9 CRF 45.4 66.0 72.8 80.2 Table 3: Experimental results in SemEval setting size. To tackle the subjective nature of the task, we propose a sequence labeling architecture that optimizes the model to learn label distributions by capturing the inter-subjectivity within the audience. We provide comparisons to models trained with other objective functions where the ground truth probabilities are mapped to binary labels and show that LDL is more effective in selecting the emphasis. As future work, we plan to investigate emphasis selection on a larger and more diverse dataset. We also plan to investigate the role of word sentiment and emotion intensity as well as more advanced language models such as BERT (Devlin et al., 2018) in modeling emphasis. Acknowledgement This research began during an internship at Adobe Research, and was sponsored in part by Adobe Research. We thank reviewers for their valuable suggestions. We also thank Amin Alipour and Niloofar Safifor comments that greatly improved the manuscript. References Isabelle Augenstein, Mrinal Das, Sebastian Riedel, Lakshmi Vikraman, and Andrew McCallum. 2017. Semeval 2017 task 10: Scienceie-extracting keyphrases and relations from scientific publications. arXiv preprint arXiv:1704.02853. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Yanju Chen and Rong Pan. 2017. Automatic emphatic information extraction from aligned acoustic data and its application on sentence compression. In Thirty-First AAAI Conference on Artificial Intelligence. 1172 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5):378. Bin-Bin Gao, Chao Xing, Chen-Wei Xie, Jianxin Wu, and Xin Geng. 2017. Deep label distribution learning with label ambiguity. IEEE Transactions on Image Processing, 26(6):2825–2838. Xin Geng. 2016. Label distribution learning. IEEE Transactions on Knowledge and Data Engineering, 28(7):1734–1748. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Solomon Kullback and Richard A Leibler. 1951. On information and sufficiency. The annals of mathematical statistics, 22(1):79–86. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Florian Laws, Christian Scheible, and Hinrich Sch¨utze. 2011. Active learning with amazon mechanical turk. In Proceedings of the conference on empirical methods in natural language processing, pages 1546– 1556. Association for Computational Linguistics. Yosi Mass, Slava Shechtman, Moran Mordechay, Ron Hoory, Oren Sar Shalom, Guy Lev, and David Konopnicki. 2018. Word emphasis prediction for expressive text to speech. pages 2868–2872. 
Taniya Mishra, Vivek Rangarajan Sridhar, and Alistair Conkie. 2012. Word prominence detection using robust yet simple prosodic features. In Thirteenth Annual Conference of the International Speech Communication Association. Hideharu Nakajima, Hideyuki Mizuno, and Sumitaka Sakauchi. 2014. Emphasized accent phrase prediction from text for advertisement text-to-speech synthesis. In Proceedings of the 28th Pacific Asia Conference on Language, Information and Computing. Naoaki Okazaki. 2007. Crfsuite: a fast implementation of conditional random fields (crfs). http://www.chokkan.org/software/crfsuite/. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Filipe Rodrigues, Francisco Pereira, and Bernardete Ribeiro. 2014. Sequence labeling with multiple annotators. Machine learning, 95(2):165–181. Filipe Rodrigues and Francisco C Pereira. 2018. Deep learning from crowds. In Thirty-Second AAAI Conference on Artificial Intelligence. Jared Rondeau and Marco Alvarez. 2018. Deep modeling of human age guesses for apparent age estimation. In 2018 International Joint Conference on Neural Networks (IJCNN), pages 01–08. IEEE. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700. Jie Yang, Thomas Drake, Andreas Damianou, and Yoelle Maarek. 2018. Leveraging crowdsourcing data for deep active learning an application: Learning intents in alexa. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, pages 23–32. International World Wide Web Conferences Steering Committee. Qi Zhang, Yang Wang, Yeyun Gong, and Xuanjing Huang. 2016. Keyphrase extraction using deep recurrent neural networks on twitter. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 836–845. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D Manning. 2017. Positionaware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35–45.
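As a supplement to the evaluation settings of Section 5.3, the following is a minimal sketch of the Matchm metric, under the assumption that ground truth and predictions are both given as per-token emphasis probabilities; the toy instances and helper names are illustrative, and ties are broken arbitrarily.

```python
def top_m(probs, m):
    """Indices of the m tokens with the highest emphasis probability."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return set(order[:m])

def match_m(ground_truth, predictions, m):
    """Match_m over a test set of (gold_probs, pred_probs) pairs.

    For each instance, the top-m tokens by gold probability are compared with
    the top-m tokens by predicted probability; the overlap is normalised by
    min(m, instance length) and averaged over the test set.
    """
    scores = []
    for gold, pred in zip(ground_truth, predictions):
        s_m, s_hat_m = top_m(gold, m), top_m(pred, m)
        scores.append(len(s_m & s_hat_m) / min(m, len(gold)))
    return sum(scores) / len(scores)

# Toy test set with two instances (per-token P(I) values).
gold = [[0.9, 0.1, 0.6, 0.2], [0.3, 0.8, 0.1]]
pred = [[0.7, 0.2, 0.5, 0.4], [0.2, 0.9, 0.3]]
for m in (1, 2):
    print(f"Match_{m} =", match_m(gold, pred, m))
```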
2019
112
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1173–1179 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1173 Abstract In this study, we propose a new multi-task learning approach for rumor detection and stance classification tasks. This neural network model has a shared layer and two task specific layers. We incorporate the user credibility information into the rumor detection layer, and we also apply attention mechanism in the rumor detection process. The attended information include not only the hidden states in the rumor detection layer, but also the hidden states from the stance detection layer. The experiments on two datasets show that our proposed model outperforms the state-of-the-art rumor detection approaches. 1 Introduction Social media platforms, such as Twitter, Reddit and Facebook, do not always pose authentic information. Rumors sometimes may spread quickly over these platforms, and they usually spread fear or hate. Therefore, rumor detection and verification has gained great interest recently. Social media platforms and government authorities are also taking great efforts to defeat the negative impacts of rumors. Rumor Detection: Rumor definition varies over different publications. The lack of consistency makes it difficult to do a head-tohead comparison between existing methods. In this paper, a rumor is defined as a statement whose truth value is true, unverified or false (Qazvinian et al., 2011). When a rumor’s veracity value is false, some studies call it “false rumor” or “fake news”. However, many previous studies give “fake news” a stricter definition: fake news is a news article published by a news outlet that is intentionally and verifiably false (Shu et al., 2017; Zubiaga et al., 2018). The focus of this study is rumor on social media, not fake news. There are also different definitions for rumor detection. In some studies, rumor detection is defined as determining if a story or online post is a rumor or non-rumor (i.e. a real story, a news article), and the task of determining the veracity of a rumor (true, false or unverified) is defined as rumor verification (Zubiaga et al., 2016; Kochkina et al., 2018). But in this paper, as well as in many previous studies (Ma et al., 2016; Shu et al, 2017), rumor detection is defined as determining the veracity value of a rumor. This means it is the same as rumor verification defined in some other studies. Rumor detection and rumor verification will be used interchangeably in this paper. Zubiaga et al. (2018a) consider the rumor resolution process as a pipeline involving four sub-tasks: (1) rumor identification, determining whether a claim is worth verifying rather than the expression of an opinion, i.e. checking a claim is rumor or non-rumor; (2) rumor tracking, collecting opinions on a rumor as it unfolds; (3) stance classification, determining the attitude of users towards the truthfulness of the rumor, and (4) rumor verification, the ultimate step where the veracity value of the rumor is predicted. This study involves the last two tasks: stance classification (detection) and rumor verification (i.e. rumor detection). And this paper mainly focuses on the final step, rumor detection. Problem Statement: Now we formally define the rumor detection problem: A story x is defined as a set of n pieces of related messages M = {m1, m2, …, mn}. 
m1 is the source message (post) that initiated the message chain, which could be a tree-structure having multiple branches. For each message mi, it has attributes representing its content, such as text and image. Each message is also associated with a user who posted it. The user also has a set of attributes, including name, description, avatar image, past posts, etc. The rumor detection task is then defined as follow: Given a story x with its message set M and user set U, the rumor Rumor Detection By Exploiting User Credibility Information, Attention and Multi-task Learning Quanzhi Li, Qiong Zhang, Luo Si Alibaba Group, US Bellevue, WA, USA {quanzhi.li, qz.zhang, luo.si}@alibaba-inc.com 1174 detection task aims to determine whether this story is true, false or unverified (or just true or false for datasets having just two labels). This definition formulates the rumor detection task as a veracity classification task. The definition is the same as the definition used in many previous studies (Shu et al, 2017; Ma et al., 2016). There are four stance categories: supporting(S), denying(D), querying(Q) and commenting(C), i.e. SDQC. The veracity of a rumor has three values: true, false, or unverified. For both stance detection and rumor detection, traditional approaches used supervised learning algorithms incorporating a variety of features generated from post content, user profiles, and diffusion patterns (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015; Zhao et al., 2015). Recent studies have shown that the sequential time-sensitive approach has benefited both rumor detection and stance detection tasks (Ma et al., 2016; Kwon et al., 2017; Ma et al., 2017; Ma et al., 2018a; Kochkina et al., 2018). In this study, we also use the sequential classification approach on these two tasks. A rumor consists of a source post that makes a claim, and a set of replies, directly or indirectly towards the source post. This set of posts may have multiple conversation branches. Our model exploits the structural information of these conversations. Multi-task learning (Caruana, 1998; Liu et al., 2016) has been applied in many NLP tasks. In this study, we use a shared Long-Short Term Memory (LSTM) layer to learn a set of common features relevant to both tasks, while each task can also learn their task-specific features via their specific layer. Compared to previous studies (Ma et al., 2018; Kochkina et al., 2018) that also use multi-task learning for stance detection and rumor verification, the main differences between ours and them are: 1. We incorporate features that describe user credibility information into the rumor detection layer. User credibility information, which is derived from user profile in this study, is critical in rumor detection task, as already proven in Liu et al. (2015) and Castillo et al. (2011). But recent studies using sequential classification have not made use of it. To our knowledge, this is the first study that incorporates user credibility/profile information in neural network for sequential classification. 2. We apply attention mechanism in the rumor detection process. And the attention includes not only the hidden states in the rumor detection layer, but also the hidden states of the stance detection layer. In a conversation branch, some posts, especially the ones with strong stance, will be more important than others in determining the rumor veracity. No previous study has exploited this on rumor detection. 
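As a rough illustration of the shared-layer design described above, the sketch below wires a shared LSTM into a stance-specific and a rumor-specific layer over the posts of one branch. The dimensions, the concatenation of the shared states with each task's input, and the use of the last hidden state for the branch-level veracity prediction are assumptions made for illustration only; the paper instead uses the attention mechanism described later in Section 3.3, and builds the user-credibility embedding from profile features.

```python
import torch
import torch.nn as nn

class MultiTaskBranchModel(nn.Module):
    """Sketch: a shared LSTM over the posts of a branch, plus a stance-specific
    and a rumor-specific LSTM with their own output heads."""

    def __init__(self, tweet_dim=300, feat_dim=20, user_dim=15, hidden=64,
                 n_stance=4, n_veracity=3):
        super().__init__()
        self.shared = nn.LSTM(tweet_dim, hidden, batch_first=True)
        # Stance layer input: tweet embedding (TE) + tweet feature embedding (FE).
        self.stance_lstm = nn.LSTM(tweet_dim + feat_dim + hidden, hidden, batch_first=True)
        self.stance_head = nn.Linear(hidden, n_stance)       # S, D, Q, C per post
        # Rumor layer input: tweet embedding (TE) + user credibility embedding (UE).
        self.rumor_lstm = nn.LSTM(tweet_dim + user_dim + hidden, hidden, batch_first=True)
        self.rumor_head = nn.Linear(hidden, n_veracity)      # true / false / unverified

    def forward(self, te, fe, ue):
        shared_h, _ = self.shared(te)                                     # (B, T, H)
        h_s, _ = self.stance_lstm(torch.cat([te, fe, shared_h], dim=-1))  # stance states
        h_r, (last_r, _) = self.rumor_lstm(torch.cat([te, ue, shared_h], dim=-1))
        stance_logits = self.stance_head(h_s)            # one prediction per post
        veracity_logits = self.rumor_head(last_r[-1])    # one prediction per branch
        return stance_logits, veracity_logits, h_s, h_r

# Toy forward pass: a batch of 2 branches, each with 5 posts.
te, fe, ue = torch.randn(2, 5, 300), torch.randn(2, 5, 20), torch.randn(2, 5, 15)
stance_logits, veracity_logits, _, _ = MultiTaskBranchModel()(te, fe, ue)
print(stance_logits.shape, veracity_logits.shape)        # (2, 5, 4) and (2, 3)
```

The sketch deliberately omits the attention step and the construction of the TE, FE and UE vectors, which the paper describes in Sections 3.1-3.3.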
Although stance detection is included in the multi-task learning network, in this study, we focus on the main task, rumor detection, so the experiments are conducted for evaluating the performance of rumor detection. Our experiments show that our approach outperforms the state-of-the-art methods. 2 Related Studies Many existing algorithms (Liu et al., 2015; Wu et al., 2015; Yang et al., 2012) for debunking rumors followed the work of Castillo et al. (2011). They studied information credibility and various features. Stance classification is also an active research area that has been studied in previous work (Ranade et al., 2013; Chuang and Hsieh, 2015; Lukasik et al., 2016; Zubiaga et al., 2016; Kochkina et al., 2017). Several studies have employed neural networks on rumor verification (Ma et al., 2016; Kochkina et al., 2017; Ma et al., 2017), and they mainly focus on analyzing the information propagation structure. Multi-task learning has been used in various NLP tasks, including rumor verification (Collobert et al., 2011; Aguilar et al., 2017; Lan et al., 2017; Ma et al., 2018a; Kochkina et al., 2018). Kochkina et al. (2018) proposed a multi-task method without task specific layer for rumor verification. MT-ES is a multi-task approach using Gated Recurrent Unit (GRU) (Cho et al., 2014) with a task specific layer for each task (Ma et al., 2018a). MT-ES has no attention mechanism, and it does not use user information. Ma et al. (2018b) proposed a model based on tree-structured recursive neural networks. 3 The Proposed Model 3.1 The Multi-task Network Structure Figure 1 presents the high-level structure of our proposed multi-task learning approach. The middle layer is a shared layer, shared by the two tasks. This layer is to extract the common patterns between these two tasks, via the shared 1175 parameters. The upper layer is for stance detection, and the lower layer is for rumor detection. These two layers will capture task Figure 1. The high-level structure of our proposed approach. The shared LSTM layer is in the middle (in the red dot-line rectangle). The upper layer is the stance detection specific layer, and the lower layer is for rumor verification task. specific features. In this figure, we assume the posts are tweets, and will use tweets as examples in the following sections. The input to the two task specific layers is a claim (rumor, thread) branch. Take the rumor propagation path in Figure 2 as an example, this rumor has four branches, and each branch has an input sequence [x1, x2, …, xn], fed into the two task specific layers. x1 is the source tweet (post), and xn is the last tweet in a branch. Tweet Embedding (TE): We generate the tweet embedding through an attention-based LSTM network. The word embeddings were built from 200 million tweets using the word2vec model (Mikolov et al., 2013; Li et al., 2017). Figure 2: A rumor propagation example. There are four branches in this rumor. 3.2 The Stance Detection Layer As shown in Figure 1, the stance detection layer uses a standard LSTM model. The input xi is a concatenation of two types of features: the tweet embedding (TE) and a tweet feature embedding (FE). FE is generated using the same list of features described in (Kochkina et al., 2017). Some FE feature examples are content length, presence of a URL, and if it is a source tweet or not. At each time step i, the hidden state hsi is fed to a fully connected hidden layer, and a softmax layer is used to predict the stance type (e.g. S, D, Q, C). 
These hidden states are also used in the attention step of the rumor verification task. 3.3 The Rumor Verification Layer The lower layer of Figure 1 shows the structure of the rumor verification process. At each step, the input xi is represented by two vectors, tweet embedding (TE) and user information embedding (UE). UE is to represent user credibility information. User Credibility Information: Many previous studies have shown that user credibility information is very important in rumor verification (Li et al., 2016; Liu et al., 2015). This is especially true when a rumor is debunked or supported by a credible user, such as a verified user, news agent, government agent, or a professional in the area of the rumor topic. But recent studies using sequential classification and 1176 neural network have not made use of this information. We hypothesize that this information will improve rumor verification performance. In this study, we derive the credibility information from user profile. We use the features described in (Liu et al., 2015) to derive this information. Some feature examples are: is verified account, if profile includes location, if profile has description, etc. These information are processed and concatenated together as the UE embedding, and then UE is concatenated with TE as input. Attention-based LSTM: In a conversation branch, different posts will have different impacts on the rumor veracity. For example, the tweets with strong support or deny stance should have more impact for predicting rumor veracity. In order to better exploit the stance information, we explicitly include the hidden states from the stance layer in the attention calculation. Besides the tweets with strong stance, we should also pay more attention to the credible users. This can be done through attention in the rumor-specific layer, since it has already encoded the user credibility information through UE embedding. Therefore, we use an attention-based LSTM to give more attention to the important tweets. At each step i, the hidden state from the upper layer and the state from the lower layer are actually concatenated and attended together. In other words, they use the same attention weight, i. Vectors in sequence hRi and hSi are fed into a learnable function a(hRi, hSi) to generate a probability vector ai . The vector R is then computed as a weighted average of (hRi, hSi), with weighting given by ai:  (1) The hidden state R is fed into a fully connected layer, and softmax is used for veracity prediction. 4 Experiments and Results Datasets: Two publicly available rumor datasets are used: RumorEval (Derczynski et al., 2017) and PHEME (Zubiaga et al., 2016; Zubiaga et al., 2017). RumorEval was released as part of the SemEval-2017 Task 8 competition (Derczynski et al., 2017). It contains 325 rumors (4017 branches) from Twitter. Each tweet is also labeled with a stance. The PHEME dataset has 1,972 rumors. But its tweets have no stance label. To get their stance labels for the multi-task learning, following (Kochkina et al., 2018), we also used the stance detection algorithm described in (Kochkina et al., 2017) to automatically annotate these tweets. The RumorEval dataset was provided with a training/development/testing split. For PHEME dataset, we use cross validation, same as (Kochkina et al., 2018). Accuracy and Macro F1 are used as the evaluation metrics. 
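Returning briefly to the attention step of Section 3.3 (Eq. 1), a minimal sketch is given below: the per-post rumor states h_R and stance states h_S are concatenated, scored by a small learnable function, and combined into a single vector R for the branch-level veracity softmax. The linear form of the scorer and all dimensions are illustrative assumptions; the paper only specifies that the scoring function a(h_Ri, h_Si) is learnable.

```python
import torch
import torch.nn as nn

class JointAttentionVeracity(nn.Module):
    """Sketch of the joint attention: rumor-layer and stance-layer states share
    one attention weight per post; their weighted average R feeds the veracity
    classifier."""

    def __init__(self, hidden=64, n_veracity=3):
        super().__init__()
        self.scorer = nn.Linear(2 * hidden, 1)            # a(h_Ri, h_Si), assumed linear
        self.classifier = nn.Linear(2 * hidden, n_veracity)

    def forward(self, h_r, h_s):
        joint = torch.cat([h_r, h_s], dim=-1)                       # (B, T, 2H)
        a = torch.softmax(self.scorer(joint).squeeze(-1), dim=1)    # (B, T)
        R = (a.unsqueeze(-1) * joint).sum(dim=1)                    # weighted average, (B, 2H)
        return self.classifier(R), a                                # veracity logits + weights

# Toy usage with states produced by the two task-specific layers.
h_r, h_s = torch.randn(2, 5, 64), torch.randn(2, 5, 64)
logits, weights = JointAttentionVeracity()(h_r, h_s)
print(logits.shape, weights.shape)                                  # (2, 3) and (2, 5)
```

In the full model, R is the branch-level representation, and the rumor's final veracity is obtained by voting over the predictions of all its branches.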
Regarding the stance annotation of the RumorEval data set (Derczynski et al., 2017), as the task description paper already pointed out: the overall inter-annotator agreement rate of 63.7% showed the task to be challenging, and easier for source tweets (81.1%) than for replying tweets (62.2%). This means that there are many conflicting or inconsistent stance labels. When we analyzed the training data set, we found many such examples. To make the labels more consistent, we run an analysis to find the posts that are basically the same or highly similar, but their labels are different. We then mark these posts, and use the same label, the one labeled on the majority of these posts, on them during training. The similarity between two posts is calculated by cosine similarity measure. The similarity threshold for being considered as similar posts is empirically set as 0.75. Compared Methods: We compare our proposed model with the following approaches, including the state-of-the-art algorithms: Majority vote: this is a strong baseline which results in high accuracy due to the class imbalance in the veracity classification task. NileTMRG: this is the best veracity prediction system from SemEval-2017 Task 8 (Enayet and El-Beltagy, 2017). It is based on a linear SVM using a bag-of-words representation of the tweet concatenated with selected features. BranchLSTM: a method based on an LSTM layer followed by several dense ReLU layers and a softmax layer (Zubiaga et al., 2018b). MTL2: a multi-task method without task specific layers (Kochkina et al., 2018). Method Accuracy Macro F1 Majority(False) 0.438 0.304 NileTMRG 0.57 0.539 BranchLSTM 0.5 0.491 MTL2 0.571 0.558 Proposed model 0.638 0.606 1177 Table 1: Rumor verification result on RumorEval Ma et al. (2018a) proposed a multi-task approach using GRU, with a task specific layer for each task. It has no attention mechanism, and does not use user information. Our implementation of their approach did not achieve the performance reported in their paper using the data sets they used, so we do not compare our method to theirs here. Ma et al. (2018b) proposed a model based on tree-structured recursive neural networks . We did not include this model in our experiments, because it uses recursive network and it performs not well on datasets without long propagation path, which is the case for our datasets. Experimental Settings: Our model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth, same as (Ma et al., 2018a). Stochastic gradient descent, shuffled mini-batch, AdaDelta update, back-propagation and dropout are used in the training process. The TE size is 300. During training, for each branch, the stance task is first executed, followed by the rumor verification task, in order for the verification task to utilize the hidden states of the stance detection layer in its attention step. Zero-padding and masks are used for handling the varying lengths of the input branches; they are also used in (Kochkina et al., 2017; Ma et al., 2018a). A rumor’s final veracity is based on the voting result of all its branches. Method Accuracy Macro F1 Majority (True) 0.511 0.226 NileTMRG 0.438 0.339 BranchLSTM 0.454 0.336 MTL2 0.441 0.376 Proposed model 0.483 0.418 Table 2: Rumor verification result on PHEME dataset Results: Table 1 shows the result on RumorEval dataset, and Table 2 is for the PHEME dataset. We can see that our proposed method outperforms other approaches on both datasets. 
In both cases, the performance improvement is statistically significant at the level of p=0.01 for both accuracy and F1, using t-test (Rice, 2006). Compared to other multi-task models, our model has three main features: 1. it incorporates user credibility information in the rumor verification task, 2. it uses attention mechanism to pay more attention to the important tweets, and 3. it integrates the stance information into the attention computation. 5 Conclusion We proposed a multi-task learning approach for rumor detection and stance classification tasks. This model incorporates the user credibility information into the rumor detection layer, and uses attention mechanism in the rumor detection process. The experiments on two datasets show that our proposed model outperforms the state-ofthe-art rumor detection approaches. References Gustavo Aguilar, Suraj Maharjan, Adrian Pastor L´opez Monroy, and Thamar Solorio. 2017. A multi-task approach for named entity recognition in social media data. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 148–153. Carlos Castillo, M. Mendoza, and B. Poblete. Information credibility on twitter. WWW 2011. Rich Caruana. 1998. Multitask learning. In Learning to learn. Springer, 95–133. Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoderdecoder approaches. arXiv preprint arXiv:1409.1259 (2014). Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537, 2011. Ju-han Chuang and Shukai Hsieh. Stance classification on post comments. In Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation. 2015 Leon Derczynski, Kalina Bontcheva, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Arkaitz Zubiaga. 2017. Semeval-2017 task 8: Rumoureval: Determining rumour veracity and support for rumours. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval2017), pages 69–76. Omar Enayet and Samhaa R El-Beltagy. 2017. Niletmrg at semeval-2017 task 8: Determining rumour and veracity support for rumours on twitter. SemEval-2017. Genevieve Gorrell, Kalina Bontcheva, Leon Derczynski, Elena Kochkina, Maria Liakata, and 1178 Arkaitz Zubiaga, RumourEval 2019: Determining Rumour Veracity and Support for Rumours. SemEval 2019 Gupta, H. Lamba, P. Kumaraguru, and A. Joshi. Faking sandy: characterizing and identifying fake images on twitter during hurricane sandy. WWW 2013 Elena Kochkina, Maria Liakata, Isabelle Augenstein, 2017, Turing at SemEval-2017 Task 8: Sequential Approach to Rumour Stance Classification with Branch-LSTM, SemEval 2017 Elena Kochkina, Maria Liakata, Arkaitz Zubiaga, Allin-one: Multi-task Learning for Rumour Verification, COLING 2018 Sejeong Kwon, Meeyoung Cha, Kyomin Jung, Wei Chen, and Yajun Wang. 2013. Prominent features of rumor propagation in online social media. ICDM. Man Lan, Jianxiang Wang, Yuanbin Wu, Zheng-Yu Niu, and Haifeng Wang. 2017. Multi-task attention-based neural networks for implicit discourse relationship representation and identification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1299–1308. Quanzhi Li, Xiaomo Liu, Rui Fang, Armineh Nourbakhsh, Sameena Shah, 2016, User Behaviors in Newsworthy Rumors: A Case Study of Twitter. 
The 10th International AAAI Conference on Web and Social Media (ICWSM 2016) Quanzhi Li, Sameena Shah, Xiaomo Liu, Armineh Nourbakhsh, 2017, Data Set: Word Embeddings Learned from Tweets and General Data, The 11th International AAAI Conference on Web and Social Media (ICWSM 2017). Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Recurrent neural network for text classification with multi-task learning. IJCAI 2016 Xiaomo Liu, Armineh Nourbakhsh, Quanzhi Li, Rui Fang, Sameena Shah, 2015, Real-time Rumor Debunking on Twitter, CIKM 2015. Michal Lukasik, P. K. Srijith, Duy Vu, Kalina Bontcheva, Arkaitz Zubiaga, and Trevor Cohn. 2016. Hawkes processes for continuous time sequence classification: an application to rumour stance classification in twitter. ACL 2016 Jing Ma, Wei Gao, Zhongyu Wei, Yueming Lu, and Kam-Fai Wong. 2015. Detect Rumors Using Time Series of Social Context Information on Microblogging Websites. In Proceedings of CIKM. Jing Ma, Wei Gao, Prasenjit Mitra, Sejeong Kwon, Bernard J Jansen, Kam-Fai Wong, and Meeyoung Cha. 2016. Detecting rumors from microblogs with recurrent neural networks. In Proceedings of IJCAI. Jing Ma, Wei Gao, Kam-Fai Wong, 2017, Detect rumors in microblog posts using propagation structure via kernel learning, ACL 2017 Jing Ma, Wei Gao, Kam-Fai Wong, Detect Rumor and Stance Jointly by Neural Multi-task Learning, WWW 2018 Jing Ma, Wei Gao, Kam-Fai Wong, Rumor Detection on Twitter with Tree-structured Recursive Neural Networks, ACL 2018 M. Mendoza, B. Poblete, and C. Castillo. Twitter under crisis: Can we trust what we rt? In Proc. First Workshop on Social Media Analytics, 2010. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of NIPS, 2013. Saif M Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016. Semeval-2016 task 6: Detecting stance in tweets. SemEval 2016. K. Popat, S. Mukherjee, A. Yates, G. Weikum: DeClarE: Debunking Fake News and False Claims using Evidence-Aware Deep Learning, in Proceedings of EMNLP, 2018. V. Qazvinian, E. Rosengren, D. R. Radev, and Q. Mei. Rumor has it: Identifying misinformation in microblogs. EMNLP 2011. Sarvesh Ranade, Rajeev Sangal, and Radhika Mamidi. 2013. Stance classification in online debates by recognizing users’ intentions. SIGDIAL 2013. John A. Rice. 2006. Mathematical Statistics and Data Analysis, Third Edition, Duxbury Advanced Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. 2017. Fake news detection on social media: A data mining perspective. SIGKDD Explorations Newsletter K. Wu, S. Yang, and K. Q. Zhu. False rumors detection on sina weibo by propagation structures. IEEE ICDE 2015. Fan Yang, Yang Liu, Xiaohui Yu, and Min Yang. 2012. Automatic detection of rumor on sina weibo. ACM SIGKDD Workshop on Mining Data Semantics. Zhe Zhao, Paul Resnick, and Qiaozhu Mei. 2015. Enquiring Minds: Early Detection of Rumors in Social Media from Enquiry Posts. WWW 2015 Arkaitz Zubiaga, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Peter Tolmie. 2016. 1179 Analysing how people orient to and spread rumours in social media by looking at conversational threads. PloS one 11(3):e0150989. Arkaitz Zubiaga, Maria Liakata, and Rob Procter. 2017. Exploiting context for rumour detection in social media. In International Conference on Social Informatics, pages 109–123. Springer. Arkaitz Zubiaga, Ahmet Aker, Kalina Bontcheva, Maria Liakata, and Rob Procter. 2018a. 
Detection and resolution of rumours in social media: A survey. ACM Comput. Survey. Arkaitz Zubiaga, Elena Kochkina, Maria Liakata, Rob Procter, Michal Lukasik, Kalina Bontcheva, Trevor Cohn, and Isabelle Augenstein. 2018b. Discourse-aware rumour stance classification in social media using sequential classifiers. IPM
2019
113
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1180–1184 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1180 Context-specific language modeling for human trafficking detection from online advertisements Saeideh Shahrokh Esfahani Accenture Technology Labs San Francisco, CA [email protected] Michael J. Cafarella Department of Computer Science University of Michigan [email protected] Maziyar Baran Pouyan Accenture Technology Labs San Francisco, CA maziyar.baran.pouyan@accenture Gregory J. DeAngelo Department of Economics Claremont Graduate University [email protected] Elena Eneva Accenture Technology Labs San Francisco, CA [email protected] Andrew E. Fano Accenture Technology Labs San Francisco, CA [email protected] Abstract Human trafficking is a worldwide crisis. Traffickers exploit their victims by anonymously offering sexual services through online advertisements. These ads often contain clues that law enforcement can use to separate out potential trafficking cases from volunteer sex advertisements. The problem is that the sheer volume of ads is too overwhelming for manual processing. Ideally, a centralized semiautomated tool can be used to assist law enforcement agencies with this task. Here, we present an approach using natural language processing to identify trafficking ads on these websites. We propose a classifier by integrating multiple text feature sets, including the publicly available pre-trained textual language model Bi-directional Encoder Representation from transformers (BERT). In this paper, we demonstrate that a classifier using this composite feature set has significantly better performance compared to any single feature set alone. 1 Introduction In 2013, the Global Slavery Index reported that 30 million individuals were living in involuntary servitude. Another estimation found that 600,000 women are trafficked in the sex industry per year with the United States being the second most popular destination for these individuals (Kara, 2009); (Schauer and Wheaton, 2006). In the last decade, it has become more difficult for law enforcement (LE) to trace traffickers as they have begun to take increasing advantage of online advertisement platforms for sexual services to solicit clients and become less visible. LE is capable of tracking the posted ads and mining such data to detect trafficking victims. However, the large volume of online unstructured data, the high degree of similarity of ads (Figure 1), and the lack of an automated approach in detecting suspicious activities through advertisements present obstacles for LE to independently develop methods for surveying these criminal activities. Sex trafficking advertisements are unique texts. They have incorrect grammatical structures and misspellings, and are enriched with unconventional words, abbreviations, and emojis. Oftentimes the author uses emojis and emoticons to convey messages to a potential customer. In particular these types of advertisements may also contain equivocal words, e.g., roses as a substitute for dollars. Additionally, dominant keywords from these online ads continuously evolve as traffickers and consenting sex workers alike seek to evade prosecution. While previous researchers have tried to develop automated systems to detect trafficking advertisements, this has proved an enormous challenge for natural language processing and machine learning. 
In (Whitney et al., 2018), Whitney and colleagues propose to track the use of emojis and their significance in online sex ads as a potential indicator of trafficking. This team processed emojis to determine the meaning of them used 1181 (a) Close your eyes and imagine sliding into a warm flowing river of relaxation as I slowly pull and push your worries away. I want you here with me. Satisfy my need to please you now. Call Lisa xxx-xxxx-xxxx (A) (b) Hi gentlemen, Meet xxxx beauty Annie, She is 5\'8, very slim, honey blonde hair, gorgeous long legs. Very sexy, friendly and engaging. Call xxx-xxxx-xxxx to schedule your visit. Xo Xo, See u soon (B) Figure 1: Two examples of online sex ads describing (a) a trafficking victim and (b) a non-trafficked provider, selected from our labeled ads. in a sample of online ads, as indicated by interviews with law enforcement officials and individuals combating human trafficking. Taking a different approach, Tong, Zadeh, and colleagues (Tong et al., 2017) collaborated with LE officials and annotated 10,000 ads. With these annotated texts, they proposed the use of deep multimodal models to reach the accuracy of LE officials in identifying suspicious ads. Szekely and colleagues (Szekely et al., 2015) created a large generic knowledge graph from a large database of online sexual ads that allows for visualization and querying data. In this paper, we present part of an ongoing project. Unlike previous studies, we tested our method on a relatively large number of ads labeled based on the corresponding phone number rather than human interpretation of the text itself. In the following sections, we propose a method relying on extracting feature sets from ads to quantify their context. We later use these feature sets in several predictive models to flag suspicious ads. We also investigate the performance of a newly released pre-trained language model called the Bidirectional Encoder Representation from Transformers (BERT) (Devlin et al., 2018) to assess its power in analyzing this type of unstructured data. 2 Advertisement Annotation We created a dataset of advertisement texts by crawling thousands of ads extracted from various adult websites in 2017. We then performed our analysis to a subset, only including the data from January, February and March of 2017. In order to annotate the ads in our dataset, we further extracted phone numbers from these ads leading to a set of more than 3 million distinct phone numbers. We then used a database consisting of phone numbers associated with trafficking victims, constructed in conjunction with human trafficking domain experts without direct reference to the advertising texts. Afterwards, we created a labeled data set by finding phone numbers that appear in both sets. The overlapping set contains 6,387 phone numbers, which we used to label as trafficking ads (i.e., the positive label in our precision/recall analysis). We limited our analysis to two websites, Backpage and Eroticmugshots, with 4385 ads. We selected non-trafficking’s ad examples by randomly sub-sampling from the remaining ads (i.e. not labeled as trafficking) and treated them as negative examples to make a balanced 10K dataset. We assumed a very low prevalence of trafficking ads (less than 5%) in our initial set (≈3 million phones). We discuss this decision later in the paper. After choosing approximately 10K ads, we investigated the basic characteristics of the two labels. 
The median lengths of ads, including white spaces, are 538 and 401 for positive and negative labels, respectively. After excluding stop-words and lemmatizing the words, we found 24,000 distinct uni-grams in non-trafficking ads, and 9,662 distinct unigrams in the trafficking ads. It should be noted that lemmatizing was only done for calculating the statistics in this section. 3 Text Featurization In the feature extraction step, the fundamental challenge is to quantify the textual context while retrieving information from unconventional words, abbreviations and equivocals. Here, we revisit different developed feature sets that eventually lead us to our desired contextual model. 3.1 Topic Modeling Via LDA Our hypothesis is that language patterns, including topics and word usages, can aid in discerning the ads of trafficking victims from those of nonvictims. That being said, independent or voluntary sex providers vary in their use of words, context, and topics. To test this hypothesis, we use a Latent Dirichlet Allocation (LDA) model (Blei et al., 2003). Our vision was that clustering the words, 1182 with the use of LDA to enhance the featurization, would allow us to identify the performance of words in specialized textual contexts. LDA model assigns a score based on the importance of representation of the words within each topic. Therefore, the value of assigned scores to topics indicates which ones dominate throughout the text and create the feature set as si = [si1, . . . , sik], where si is the i-th feature vector for document i containing k scores. 3.2 Average Word Vector We choose to use word embedding as a key part of our model. Although Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) word embeddings have shown promising results in semantic vector representations of words, when we used these models on our texts we found that they missed many of the novel word uses and abbreviations. Instead, we chose to use FastText (Bojanowski et al., 2017) for our semantic word representation, as it is based on character-level word embedding and the word representation is the sum of vectors. With that said, we hereby define the second feature set for each text as νi = P j νi,j n , where n is the number of words in the text i, νi,j is the vector representation of j-th word of language model with dimension of pν (here set to 100 based on experiment). 3.3 Pre-trained BERT Thus far, we have defined features which need to be trained using the ads we already had. As our next features set, we propose to use a pre-trained model. Since we believe pre-trained word embedding on general domain is not able to capture all the rare, equivocal, and abbreviated words and phrases in our sexual advertisement text (Tong et al., 2017), we are motivated in finding the most comprehensive deep learning model and chose to assess the newly released Bidirectional Encoder Representation from Transformers (BERT) (Devlin et al., 2018). A word representation using BERT is made by using its bidirectional, i.e., left and right, context. BERT is released with two model sizes: (1) BERTBASE with 12 layers, 768 hidden layers and 12 self-attention heads, and (2) BERTLARGE with 24 layers, 1024 hidden layers, and 16 self-attention heads. One should note that in this study we do not use fine-tuned BERT model to examine the true power of BERT. Here, we choose to use the pre-trained BERTBASE model which encodes our document to a vector representation of size 768 for each document i and denote that by bi = [bi1, . . . 
, bi768]. 3.4 Integrating LDA, AWV and BERT Finally, we propose a new feature set consisting of the three types of features explained above. The rationale behind this composite feature set is to allow for the use of textual context as well as the simpler features. Therefore, we have the final feature vector defined as as xi = [si, νi, bi], with the dimension of p = k + pν + 768. 4 Experiments In our study, we employ the feature models described above and compare the results of the binary classification corresponding to them. We use logistic regression and compute the precision and recall curve (PRC) to evaluate the performance of different models. Moreover, in this application, it is important to have a model with good recall while keeping high precision, i.e., a high positive predictive value (PPV) to avoid unnecessary actions. To do so, we investigate the sensitivity of models in different high PPVs. Pre-processing. We choose to not remove stop words or not use any stemming or lemmatization techniques as we are faced with different writing structures which could be informative for our model. We test the impact of emojis and punctuation by training and testing our model by creating two text sets. In the first text set, we keep the emojis and punctuation and remove them in the second set. In the second set, we convert the emojis to words. Numbers in the texts are removed, because: 1) we have made the labels based on phone numbers and 2), the ads are likely to have the same age or same price throughout the texts. We then divide the data into an 80/20% training/testing set. In the following sections, we describe how each set of features is processed while using logistic regression as our fixed classification model. LDA Features. We begin with features coming from LDA topic modeling scores where we assign it to 12 topics. Gensim LDA is implemented by making a bag of words dictionary of our training set. We find this optimal topic number where we examined the explained LDA feature set via crossvalidation on January 2017 alone. 1183 AWV Features. Our FastText model is trained on a set including a minimum count of 2 words and a window size of 3 to give us a vector of dimension 100. After training the FastText model, the average word vector of the training set is computed. Using this saved language model from the training set, we compute the feature test vectors. BERT Features. For encoding our texts using BERT, we make a list of all documents and use the BERT service client. We use the weights of the words that BERTBASE learned in its pretraining to encode each document to a vector of size 768 for both the training and testing sets. We examine encoding texts with both Cased BERT (C-BASED) and Uncased BERT (U-BERT). In the U-BERT, the text has been lower cased, whereas in C-BERT, the true case and accent markers are preserved. Full Features. In this final step in featurization towards our composite model, we combine all three types of features to build a unified feature set, i.e. combining LDA, AWV and BERT. 5 Results and Discussions Figure 2 depicts the results of the classifications of the different feature sets. It can be seen that both classification approaches based on LDA and the average word vector features achieve similarly average precision scores (APS). Based on our analysis, keeping the entire text or removing emojis and punctuation do not significantly impact the results. 
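For concreteness, the pipeline whose results are reported in Figure 2 can be sketched as follows: the composite vector xi = [si, νi, bi] is assembled per document and scored with logistic regression, a precision–recall curve, and the recall at a high-precision operating point. The three feature extractors are hypothetical callables standing in for the LDA, FastText, and BERT encoders of Section 3, so their interfaces are assumptions rather than the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, precision_recall_curve
from sklearn.model_selection import train_test_split

def featurize(texts, lda_scores, fasttext_avg, bert_encode):
    """Concatenate x_i = [s_i, v_i, b_i] for each ad.

    lda_scores, fasttext_avg, bert_encode: hypothetical callables returning
    per-document vectors of size k, 100, and 768, respectively.
    """
    feats = [np.concatenate([lda_scores(t), fasttext_avg(t), bert_encode(t)])
             for t in texts]
    return np.vstack(feats)

def evaluate(texts, labels, lda_scores, fasttext_avg, bert_encode):
    X = featurize(texts, lda_scores, fasttext_avg, bert_encode)
    y = np.asarray(labels)
    # 80/20 training/testing split, as in the experimental setup above.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    probs = clf.predict_proba(X_te)[:, 1]
    precision, recall, _ = precision_recall_curve(y_te, probs)
    aps = average_precision_score(y_te, probs)
    # A crude way to read recall at a given precision (here 85% PPV) off the curve.
    mask = precision >= 0.85
    recall_at_85 = recall[mask].max() if mask.any() else 0.0
    return aps, recall_at_85
```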
From Figure 2, it can be seen that, despite small improvements, different featurizations provide similar APS values. However, focusing more on the early parts of the PRC, i.e., high precision, we can see that there is a significant improvement of recall. For example, as summarized in Figure 3, at 85% precision, our proposed full model (with U-BERT) achieves 69% and 67% sensitivity on pure text and text without emojis and punctuation, respectively. However, in the composite model with C-BERT, there is an opposite effect where recalls become 65% and 69% for the two scenarios, respectively. Comparing to the results of the classifiers with different feature sets (under U-BERT), the model utilizing the full feature set provides 26% recall improvement over the three individual ones, i.e. 69% vs 28%−42%, when precision is set to 85%. A similar observation holds for 90% precision. As a concluding remark, we should emphasize our (a) 0.0 0.2 0.4 0.6 0.8 1.0 Recall 0.0 0.2 0.4 0.6 0.8 1.0 Precision LDA (APS: 0.79) AWV (APS: 0.81) C-BERT (APS: 0.78) U-BERT (APS: 0.80) C-BERT+LDA+AWV (APS: 0.87) U-BERT+LDA+AWV (APS: 0.87) (b) 0.0 0.2 0.4 0.6 0.8 1.0 Recall 0.0 0.2 0.4 0.6 0.8 1.0 Precision LDA (APS: 0.81) AWV (APS: 0.80) C-BERT (APS: 0.82) U-BERT (APS: 0.80) C-BERT+LDA+AWV (APS: 0.88) U-BERT+LDA+AWV (APS: 0.86) Figure 2: Precision and Recall curves (PRCs) and their corresponding APS values: (a) pure text, (b) text without emojis and punctuation. significant improvement in recall rate over each individual model. 6 Conclusions and Future Work In this paper, we introduced different models based on different text featurizations where the main goal was to engineer features that allowed for understanding the context of sexual ads and remove the restriction of using keywords. We have proposed a composite model and compared its performance with other simpler models. For more evaluation, we examined the recall rate of models in 85% and 90% of precision. The full feature set, i.e. LDA+AWV+BERT, outperformed others as it indicated that having comprehensive features may be conveying more information about the advertisements. Thus, we can significantly increase the PPV of our model while maintaining a high recall rate. It also should be noted that our non-trafficking examples may still contain some trafficking ads. We thus note with caution that the false positives in our model may not be truly false. Given that, in 1184 (a) 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Recall 0.21 0.24 0.11 0.18 0.48 0.46 0.42 0.30 0.28 0.42 0.65 0.69 AWV LDA C-BERT U-BERT LDA+AWV+C-BERT LDA+AWV+U-BERT (b) 90% 85% 90% 85% 90% 85% 90% 85% 90% 85% 90% 85% Precision 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Recall 0.19 0.34 0.34 0.30 0.52 0.44 0.38 0.44 0.50 0.43 0.69 0.67 Figure 3: Recall rates corresponding to 90% and 85% precision: (a) pure text, (b) text without emojis and punctuation. our future work, we will be investigating those false positive cases with our collaborators to assess what the correct label for these ads should be. Moreover, since the proposed full feature set involves hundreds of features we plan to increase our sample size to have a better estimation of the performance of our final predictor. We also envision that by including other underlying components from these advertisements, we can assist law enforcement officers with an automated framework to sift millions of sexual advertisements and spend time on especially suspicious activities. Finally, in this study, we tested our model on a balanced data set. 
However, in the real world, the number of trafficking ads is always far lower than the number of non-trafficking ones. After collecting more labeled data, and tuning our model using anomaly detection techniques like Isolation Forests (Liu et al., 2008), we hope to expand this study to the stage where we are able to use unbalanced data sets. Acknowledgments This study was supported by Accenture Labs. We would like to thank Jana Thompson for critical feedback on the manuscript. References David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Siddharth Kara. 2009. Sex trafficking: Inside the business of modern slavery. Columbia University Press. Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou. 2008. Isolation forest. In 2008 Eighth IEEE International Conference on Data Mining, pages 413–422. IEEE. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Edward J Schauer and Elizabeth M Wheaton. 2006. Sex trafficking into the united states: A literature review. Criminal Justice Review, 31(2):146–169. Pedro Szekely, Craig A Knoblock, Jason Slepicka, Andrew Philpot, Amandeep Singh, Chengye Yin, Dipsy Kapoor, Prem Natarajan, Daniel Marcu, Kevin Knight, et al. 2015. Building and using a knowledge graph to combat human trafficking. In International Semantic Web Conference, pages 205– 221. Springer. Edmund Tong, Amir Zadeh, Cara Jones, and LouisPhilippe Morency. 2017. Combating human trafficking with deep multimodal models. arXiv preprint arXiv:1705.02735. Jessica Whitney, Murray Jennex, Aaron Elkins, and Eric Frost. 2018. Don’t want to get caught? don’t say it: The use of emojis in online human sex trafficking ads.
2019
114
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1185–1197 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1185 Self-Attentional Models for Lattice Inputs Matthias Sperber1, Graham Neubig2, Ngoc-Quan Pham1, Alex Waibel1,2 1Karlsruhe Institute of Technology, Germany 2Carnegie Mellon University, USA {first}.{last}@kit.edu, [email protected] Abstract Lattices are an efficient and effective method to encode ambiguity of upstream systems in natural language processing tasks, for example to compactly capture multiple speech recognition hypotheses, or to represent multiple linguistic analyses. Previous work has extended recurrent neural networks to model lattice inputs and achieved improvements in various tasks, but these models suffer from very slow computation speeds. This paper extends the recently proposed paradigm of self-attention to handle lattice inputs. Self-attention is a sequence modeling technique that relates inputs to one another by computing pairwise similarities and has gained popularity for both its strong results and its computational efficiency. To extend such models to handle lattices, we introduce probabilistic reachability masks that incorporate lattice structure into the model and support lattice scores if available. We also propose a method for adapting positional embeddings to lattice structures. We apply the proposed model to a speech translation task and find that it outperforms all examined baselines while being much faster to compute than previous neural lattice models during both training and inference. 1 Introduction In many natural language processing tasks, graphbased representations have proven useful tools to enable models to deal with highly structured knowledge. Lattices are a common instance of graph-based representations that allows capturing a large number of alternative sequences in a compact form (Figure 1). Example applications include speech recognition lattices that represent alternative decoding choices (Saleem et al., 2004; Zhang et al., 2005; Matusov et al., 2008), word segmentation lattices that capture ambiguous decisions on word boundaries or morphological alternatives (Dyer et al., 2008), word class lattices f a S c E d b e g 1 0.4 1 1 1 0.6 0.2 0.8 1 e a S E c b d 1 0.4 1 1 0.6 0.2 0.8 e a S c b d 1 0.45 0.88 1 0.55 0.12 1 Figure 1: Example of a node-labeled lattice. Nodes are labeled with word tokens and posterior scores. (Navigli and Velardi, 2010), and lattices for alternative video descriptions (Senina et al., 2014). Prior work has made it possible to handle these through the use of recurrent neural network (RNN) lattice representations (Ladhak et al., 2016; Su et al., 2017; Sperber et al., 2017), inspired by earlier works that extended RNNs to tree structures (Socher et al., 2013; Tai et al., 2015; Zhu et al., 2015). Unfortunately, these models are computationally expensive, because the extension of the already slow RNNs to tree-structured inputs prevents convenient use of batched computation. An alternative model, graph convolutional networks (GCN) (Duvenaud et al., 2015; Defferrard et al., 2016; Kearnes et al., 2016; Kipf and Welling, 2017), is much faster but considers only local context and therefore requires combination with slower RNN layers for typical natural language processing tasks (Bastings et al., 2017; Cetoli et al., 2017; Vashishth et al., 2018). 
For linear sequence modeling, self-attention (Cheng et al., 2016; Parikh et al., 2016; Lin et al., 2017; Vaswani et al., 2017) now provides an alternative to RNNs. Self-attention encodes sequences by relating sequence items to one another through computation of pairwise similarity, with addition of positional encoding to model positions of words in a linear sequence. Self-attention has gained popularity thanks to strong empirical results and computational efficiency afforded by paralleliz1186 able computations across sequence positions. In this paper, we extend the previously purely sequential self-attentional models to lattice inputs. Our primary goal is to obtain additional modeling flexibility while avoiding the increased cost of previous lattice-RNN-based methods. Our technical contributions are two-fold: First, we incorporate the global lattice structure into the model through reachability masks that mimic the pairwise conditioning structure of previous recurrent approaches. These masks can account for lattice scores if available. Second, we propose the use of lattice positional embeddings to model positioning and ordering of lattice nodes. We evaluate our method on two standard speech translation benchmarks, replacing the encoder component of an attentional encoder-decoder model with our proposed lattice self-attentional encoder. Results show that the proposed model outperforms all tested baselines, including LSTMbased and self-attentional sequential encoders, a LatticeLSTM encoder, and a recently proposed self-attentional model that is able to handle graphs but only considers local context, similar to GCNs. The proposed model performs well without support from RNNs and offers computational advantages in both training and inference settings. 2 Background 2.1 Masked Self-Attention We start by introducing self-attentional models for sequential inputs, which we will extend to latticestructured inputs in § 4. Attentional models in general can be described using the terminology of queries, keys, and values. The input is a sequence of l values, along with a key corresponding to each value. For some given query, the model computes how closely each key matches the query. Here, we assume values, keys, and queries vk, kk, q∈Rd, for some dimensionality d and sequence indices k∈{1 . . . l}. Using the computed similarity scores f(q, kk), attention computes a weighted average of the values to obtain a fixed-size representation of the whole sequence conditioned on this query. In the selfattentional case, the sequence items themselves are used as queries, yielding a new sequence of same length as output in which each of the original input elements has been enriched by the respectively relevant global context. The following equations formalize this idea. We are given a sequence of input vectors xk ∈Rd. For every query index i, we compute an output vector yi as: eij = f (q (xi) , k (xj)) +mij (∀1≤j≤l) (1) αi = softmax (ei) (2) yi = l X j=1 αijv (xj) . (3) Here, unnormalized pairwise similarities eij are computed through the similarity function f, and then normalized as αij for computation of a weighted sum of value vectors. q, k, v denote parametrized transformations (e.g. affine) of the inputs into queries, keys, and values. Equation 1 also adds an attention masking term mij ∈R that allows adjusting or disabling the influence of context at key position j on the output representation at query position i. 
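For concreteness, the following NumPy sketch computes Equations 1–3 for a toy sequence, using a dot product as the similarity f and plain linear maps for q, k, v; these choices are illustrative assumptions at this point (the parametrization actually used later is the scaled multi-head variant of § 3.2).

```python
import numpy as np

def masked_self_attention(X, Wq, Wk, Wv, M):
    """Masked self-attention over a sequence.

    X: (l, d) inputs; Wq, Wk, Wv: (d, d) linear maps (illustrative);
    M: (l, l) additive mask with entries in {0, -inf}.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    E = Q @ K.T + M                       # e_ij = f(q(x_i), k(x_j)) + m_ij   (Eq. 1)
    E = E - E.max(axis=1, keepdims=True)  # stabilize the softmax numerically
    A = np.exp(E)
    A = A / A.sum(axis=1, keepdims=True)  # alpha_i = softmax(e_i)            (Eq. 2)
    return A @ V                          # y_i = sum_j alpha_ij v(x_j)       (Eq. 3)

# Toy usage: l=4 tokens, d=8 dims, with a mask that hides future positions.
l, d = 4, 8
rng = np.random.default_rng(0)
X = rng.normal(size=(l, d))
Wq, Wk, Wv = [rng.normal(size=(d, d)) for _ in range(3)]
M = np.zeros((l, l))
M[np.triu_indices(l, k=1)] = -np.inf      # m_ij = -inf for all j > i
Y = masked_self_attention(X, Wq, Wk, Wv, M)  # shape (4, 8)
```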
Masks have, for example, been used to restrict self-attention to ignore future decoder context (Vaswani et al., 2017) by setting mij = −∞for all j>i. We will use this concept in § 4.1 to model reachability structure. 2.2 Lattices We aim to design models for lattice inputs that store a large number of sequences in a compact data structure, as illustrated in Figure 1. We define lattices as directed acyclic graphs (DAGs) with the additional property that there is exactly one start node (S) and one end node (E). We call the sequences contained in the lattice complete paths, running from the start node to the end node. Each node is labeled with a word token.1 To make matters precise, let G=(V, E) be a DAG with nodes V and edges E. For k∈V , let R+ G(k) denote all successors (reachable nodes) of node k, and let N+ G(k) denote the neighborhood, defined as the set of all adjacent successor nodes. R– G(k), N– G(k) are defined analogously for predecessors. j ≻i indicates that node j is a successor of node i. For arbitrary nodes i, j, let pG (j ≻i | i) be the probability that a complete path in G contains j as a successor of i, given that i is contained in the path. Note that j /∈R+ G(i) implies pG (j ≻i | i) =0. The probability structure 1Edge-labeled lattices can be easily converted to nodelabeled lattices using the line-graph algorithm (Hemminger and Beineke, 1978). 1187 of the whole lattice can be represented through transition probabilities ptrans k,j :=pG (k ≻j | j) for j ∈N+ G(k). We drop the subscript G when clear from context. 3 Baseline Model Our proposed model builds on established architectures from prior work, described in this section. 3.1 Lattice-Biased Attentional Decoder The common attentional encoder-decoder model (Bahdanau et al., 2015) serves as our starting point. The encoder will be described in § 4. As cross-attention mechanism, we use the latticebiased variant (Sperber et al., 2017), which adjusts the attention scores αcross ij between encoder position j and decoder position i according to marginal lattice scores p (j ≻S | S) (§ 4.1.2 describes how to compute these) as follows:2 αcross ij ∝exp (score(•) + log p (j ≻S | S)) . (4) Here, score(•) is the unnormalized attention score. In the decoder, we use long short-term memory (LSTM) networks, although it is straightforward to use alternative decoders in future work, such as the self-attentional decoder proposed by Vaswani et al. (2017). We further use input feeding (Luong et al., 2015), variational dropout in the decoder LSTM (Gal and Ghahramani, 2016), and label smoothing (Szegedy et al., 2016). 3.2 Multi-Head Transformer Layers To design our self-attentional encoder, we use Vaswani et al. (2017)’s Transformer layers that combine self-attention with position-wise feedforward connections, layer norm (Ba et al., 2016), and residual connections (He et al., 2016) to form deeper models. Self-attention is modeled with multiple heads, computing independent selfattentional representations for several separately parametrized attention heads, before concatenating the results to a single representation. This increases model expressiveness and allows using different masks (Equation 1) between different attention heads, a feature that we will exploit in § 4.1. Transformer layers are computed as follows: 2We have removed the trainable peakiness coefficient from the original formulation for simplicity and because gains of this additional parameter were unclear according to Sperber et al. (2017). 
Qk = XW(q) k , Kk=XW(k) k , Vk=XW(v) k (5) Hk = softmax dropout QiK⊤ k +M  √ d ! Vk (6) H = concat(H1, H2, . . . , Hn) (7) L = LN [dropout (H + X)] (8) Y = LN [dropout (FF (L) + L)] (9) Here, X∈Rl×d, Qk, Kk, Vk∈Rl×d/n denote inputs and their query-, key-, and value transformations for attention heads with index k∈{1, . . . , n}, sequence length l, and hidden dimension d. M∈Rl×l is an attention mask to be defined in § 4.1. Similarity between keys and queries is measured via the dot product. The inputs are word embeddings in the first layer, or the output of the previous layer in the case of stacked layers. Y∈Rl×d denotes the final output of the Transformer layer. W(q) k , W(k) k , W(v) k ∈ Rd×d/n are parameter matrices. FF is a positionwise feed-forward network intended to introduce additional depth and nonlinearities, defined as FF(x)= max (0, xW1 + b1) W2 + b2. LN denotes layer norm. Note that dropout regularization (Srivastava et al., 2014) is added in three places. Up to now, the model is completely agnostic of sequence positions. However, position information is crucial in natural language, so a mechanism to represent such information in the model is needed. A common approach is to add positional encodings to the word embeddings used as inputs to the first layer. We opt to use learned positional embeddings (Gehring et al., 2017), and obtain the following after applying dropout: x′ i = dropout (xi + embed [i]) . (10) Here, a position embedding embed [i] of equal dimension with sequence item xi at position i is added to the input. 4 Self-Attentional Lattice Encoders A simple way to realize self-attentional modeling for lattice inputs would be to linearize the lattice in topological order and then apply the above model. However, such a strategy would ignore the lattice structure and relate queries to keys that cannot possibly appear together according to the lattice. 1188 f E g 1 1 1 f b a d E e c f h f a S c i d b f h f E g 1 1 1 f a S c E d b e g 1 1 0.45 0.88 1 1 1 0.55 0.12 Figure 2: Example for binary masks in forward- and backward directions. The currently selected query is node f, and the mask prevents all solid black nodes from being attended to. We find empirically that this naive approach performs poorly (§ 5.4). As a remedy, we introduce a masking scheme to incorporate lattice structure into the model (§ 4.1), before addressing positional encoding for lattices (§ 4.2). 4.1 Lattice Reachability Masks We draw inspiration from prior works such as the TreeLSTM (Tai et al., 2015) and related works. Consider how the recurrent conditioning of hidden representations in these models is informed by the graph structure of the inputs: Each node is conditioned on its direct predecessors in the graph, and via recurrent modeling on all its predecessor nodes up to the root or leaf nodes. 4.1.1 Binary Masks We propose a masking strategy that results in the same conditioning among tokens based on the lattice structure, preventing the self-attentional model from attending to lattice nodes that are not reachable from some given query node i. Figure 2 illustrates the concept of such reachability masks. Formally, we obtain masks in forward and backward direction as follows: −→ mbin ij =  0 if i∈R– (j) ∨i=j −∞ else ←− mbin ij =  0 if i∈R+ (j) ∨i=j −∞ else The resulting conditioning structure is analogous to the conditioning in lattice RNNs (Ladhak et al., 2016) in the backward and forward directions, respectively. These masks can be obtained using standard graph traversal algorithms. 
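The sketch below shows one way such binary masks could be built with a standard depth-first traversal, assuming the lattice is given as an adjacency list over integer node ids; it is an illustrative reconstruction consistent with the definitions above, not the authors' implementation.

```python
import numpy as np

def binary_reachability_masks(num_nodes, edges):
    """Forward/backward binary masks (Section 4.1.1) for a DAG.

    edges: (i, j) pairs meaning node j is a direct successor of node i.
    Returns two (n, n) additive masks with entries in {0, -inf}.
    """
    succ = [[] for _ in range(num_nodes)]
    for i, j in edges:
        succ[i].append(j)

    # reach[i, j] is True iff j is in R+(i), i.e. j is reachable from i;
    # the diagonal lets every node attend to itself.
    reach = np.eye(num_nodes, dtype=bool)

    def dfs(start, node):
        for nxt in succ[node]:
            if not reach[start, nxt]:
                reach[start, nxt] = True
                dfs(start, nxt)

    for node in range(num_nodes):
        dfs(node, node)

    # Forward mask: query i may attend to key j iff i is a predecessor of j (or i = j).
    fwd = np.where(reach, 0.0, -np.inf)
    # Backward mask: query i may attend to key j iff i is a successor of j (or i = j).
    bwd = np.where(reach.T, 0.0, -np.inf)
    return fwd, bwd

# Toy lattice with nodes S=0, a=1, b=2, E=3 and paths S-a-E and S-b-E.
fwd, bwd = binary_reachability_masks(4, [(0, 1), (0, 2), (1, 3), (2, 3)])
```

Either mask can then be plugged directly into the additive masking term M of Equation 6.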
4.1.2 Probabilistic Masks Binary masks capture the graph structure of the inputs, but do not account for potentially available lattice scores that associate lattice nodes with a probability of being correct. Prior work has found e a S E c b d 1 0.4 1 1 0.6 0.2 0.8 e a S E c b d 1 0.45 0.88 1 1 0.55 0.12 1 1 → S a b c d e E S 1 0.4 0.6 0.48 0.12 0.88 1 a 0 1 0 1 0 1 1 b 0 0 1 0.8 0.2 0.8 1 c 0 0 0 1 0 1 1 d 0 0 0 0 1 0 1 e 0 0 0 0 0 1 1 E 0 0 0 0 0 0 1 ← S a b c d e E S 1 0 0 0 0 0 0 a 1 1 0 0 0 0 0 b 1 0 1 0 0 0 0 c 1 0 0 1 0 0 0 d 1 0 1 0 1 0 0 e 1 0.45 0.55 0.55 0 1 0 E 1 0.4 0.6 0.48 0.12 0.88 1 Figure 3: Example for pairwise conditional reaching probabilities for a given lattice, which we logarithmize to obtain self-attention masks. Rows are queries, columns are keys. it critical to exploit lattice scores, especially for noisy inputs such as speech recognition lattices (Sperber et al., 2017). In fact, the previous binary masks place equal weight on all nodes, which will cause the influence of low-confidence regions (i.e., dense regions with many alternative nodes) on computed representations to be greater than the influence of high-confidence regions (sparse regions with few alternative nodes). It is therefore desirable to make the selfattentional lattice model aware of these scores, so that it can place higher emphasis on confident context and lower emphasis on context with low confidence. The probabilistic masks below generalize binary masks according to this intuition: −→ mprob ij =  log pG (j ≻i | i) if i̸=j 0 if i=j ←− mprob ij =  log pG⊤(j ≻i | i) if i̸=j 0 if i=j Here, we set log(0):=−∞. Figure 3 illustrates the resulting pairwise probability matrix for a given lattice and its reverse, prior to applying the logarithm. Note that the first row in the forward matrix and the last row in the backward matrix are the globally normalized scores of Equation 4. Per our convention regarding log(0), the −∞ entries in the mask will occur at exactly the same 1189 Algorithm 1 Computation of logarithmized probabilistic masks via dynamic programming. – given: DAG G = (V, E); transition probs ptrans k,j 1: ∀i, j ∈V : qi,j ←0 2: for i ∈V do ▷loop over queries 3: qi,i ←1 4: for k ∈topologic-order (V ) do 5: for next ∈N+ (k) do 6: qi,next ←qi,next + ptrans k,next · qi,k 7: end for 8: end for 9: end for 10: ∀i, j ∈V : mprob ij ←log qi,j places as with the binary reachability mask, because the traversal probability is 0 for unreachable nodes. For reachable nodes, the probabilistic mask causes the computed similarity for low-confident nodes (keys) to be decreased, thus increasing the impact of confident nodes on the computed hidden representations. The proposed probabilistic masks are further justified by observing that the resulting model is invariant to path duplication (see Appendix A), unlike the model with binary masks. The introduced probabilistic masks can be computed in O |V |3 from the given transition probabilities by using the dynamic programming approach described in Algorithm 1. The backwarddirected probabilistic mask can be obtained by applying the same algorithm on the reversed graph. 4.1.3 Directional and Non-Directional Masks The above masks are designed to be plugged into each Transformer layer via the masking term M in Equation 6. However, note that we have defined two different masks, −→ mij and ←− mij. To employ both we can follow two strategies: (1) Merge both into a single, non-directional mask by using ←→ m ij = max {−→ mij, ←− mij}. 
(2) Use half of the attention heads in each multi-head Transformer layer (§ 3.2) with forward masks, the other half with backward masks, for a directional strategy. Note that when the input is a sequence (i.e., a lattice with only one complete path), the nondirectional strategy reduces to unmasked sequential self-attention. The second strategy, in contrast, reduces to the directional masks proposed by Shen et al. (2018) for sequence modeling. S a d e f g c E 0 1 2 3 2 2 4 5 b 1 Figure 4: Lattice positions, computed as longest-path distance from the start node S. 4.2 Lattice Positional Encoding Encoding positional information in the inputs is a crucial component in self-attentional architectures as explained in § 3.2. To devise a strategy to encode positions of lattice nodes in a suitable fashion, we state a number of desiderata: (1) Positions should be integers, so that positional embeddings (§ 3.2) can be used. (2) Every possible lattice path should be assigned strictly monotonically increasing positions, so that relative ordering can be inferred from positions. (3) For a compact representation, unnecessary jumps should be avoided. In particular, for at least one complete path the positions should increase by exactly 1 across all adjacent succeeding lattice nodes. A naive strategy would be to use a topological order of the nodes to encode positions, but this clearly violates the compactness desideratum. Dyer et al. (2008) used shortest-path distances between lattice nodes to account for distortion, but this violates monotonicity. Instead, we propose using the longest-path distance (ldist) from the start node, replacing Equation 10 with: x′ i = dropout (xi + embed [ldist (S →i)]) . This strategy fulfills all three desiderata, as illustrated in Figure 4. Longest-path distances from the start node to all other nodes can be computed in O |V |2 using e.g. Dijkstra’s shortest-path algorithm with edge weights set to −1. 4.3 Computational Complexity The computational complexity in the selfattentional encoder is dominated by generating the masks (O |V |3 ), or by the computation of pairwise similarities (O |V |2 ) if we assume that masks are precomputed prior to training. Our main baseline model, the LatticeLSTM, can be computed in O (|E|), where |E| ≤|V |2. Nevertheless, constant factors and the effect of batched operations lead to considerably faster computations for the self-attentional approach in practice (§ 5.3). 1190 5 Experiments We examine the effectiveness of our method on a speech translation task, in which we directly translate decoding lattices from a speech recognizer into a foreign language. 5.1 Settings We conduct experiments on the Fisher–Callhome Spanish–English Speech Translation corpus (Post et al., 2013). This corpus contains translated telephone conversations, along with speech recognition transcripts and lattices. The Fisher portion (138k training sentences) contains conversations between strangers, and the smaller Callhome portion (15k sentences) contains conversations between family members. Both and especially the latter are acoustically challenging, indicated by speech recognition word error rates of 36.4% and 65.3% on respective test sets for the transcripts contained in the corpus. The included lattices have oracle word error rates of 16.1% and 37.9%. We use XNMT (Neubig et al., 2018) which is based on DyNet (Neubig et al., 2017a), with the provided self-attention example as a starting point.3 Hidden dimensions are set to 512 unless otherwise noted. 
We use a single-layer LSTMbased decoder with dropout rate 0.5. All selfattentional encoders use three layers with hidden dimension of the FF operation set to 2048, and dropout rate set to 0.1. LSTM-based encoders use 2 layers. We follow Sperber et al. (2017) to tokenize and lowercase data, remove punctuation, and replace singletons with a special unk token. Beam size is set to 8. For training, we find it important to pretrain on sequential data and finetune on lattice data (§ 5.6). This is in line with prior work (Sperber et al., 2017) and likely owed to the fact that the lattices in this dataset are rather noisy, hampering training especially during the early stages. We use Adam for training (Kingma and Ba, 2014). For sequential pretraining, we follow the learning schedule with warm-up and decay of Vaswani et al. (2017). Finetuning was sometimes unstable, so we finetune both using the warm-up/decay strategy and using a fixed learning rate of 0.0001 and report the better result. We use large-batch training with minibatch size of 1024 sentences, accumulated over 16 batched computations of 64 sen3Our code is available: http://msperber.com/ research/acl-lattice-selfatt/ Encoder model Inputs Fisher Callh. LSTM4 1-best 35.9 11.8 Seq. SA 1-best 35.71 12.36 Seq. SA (directional) 1-best 37.42 13.00 Graph attention lattice 35.71 11.87 LatticeLSTM4 lattice 38.0 14.1 Lattice SA (proposed) lattice 38.73 14.74 Table 1: BLEU scores on Fisher (4 references) and Callhome (1 reference), for proposed method and several baselines. tences each, due to memory constraints. Early stopping is applied when the BLEU score on a held-out validation set does not improve over 15 epochs, and the model with the highest validation BLEU score is kept. 5.2 Main Results Table 1 compares our model against several baselines. Lattice models tested on Callhome are pretrained on Fisher and finetuned on Callhome lattices (Fisher+Callhome setting), while lattice models tested on Fisher use a Fisher+Fisher training setting. All sequential baselines are trained on the reference transcripts of Fisher. The first set of baselines operates on 1-best (sequential) inputs and includes a bidirectional LSTM, an unmasked self-attentional encoder (SA) of otherwise identical architecture with our proposed model, and a variant with directional masks (Shen et al., 2018). Next, we include a graph-attentional model that masks all but adjacent lattice nodes (Veliˇckovi´c et al., 2018) but is otherwise identical to the proposed model, and a LatticeLSTM. Note that these lattice models both use the cross-attention latticescore bias (§ 3.1). Results show that our proposed model outperforms all examined baselines. Compared to the sequential self-attentional model, our models improves by 1.31–1.74 BLEU points. Compared to the LatticeLSTM, our model improves results by 0.64–0.73 BLEU points, while at the same time being more computationally efficient (§ 5.3). Graph attention is not able to improve over the sequential baselines on our task due to its restriction to local context. 1191 Training Inference Encoder Batching Speed Batching Speed Sequential encoder models LSTM M 4629 – 715 SA M 5021 – 796 LatticeLSTM and lattice SA encoders LSTM – 178 – 391 LSTM A 710 A 538 SA M 2963 – 687 SA A 748 A 718 Table 2: Computation speed (words/sec), averaged over 3 runs. Batching is conducted manually (M), through autobatching (A), or disabled (–). 
The selfattentional lattice model displays superior speed despite using 3 encoder layers, compared to 2 layers for the LSTM-based models. 5.3 Computation Speed The self-attentional lattice model was motivated not only by promising model accuracy (as confirmed above), but also by potential speed gains. We therefore test computation speed for training and inference, comparing against LSTM- and LatticeLSTM-based models. For fair comparison, we use a reimplementation of the LatticeLSTM so that all models are run with the exact same toolkits and identical decoder architectures. Again, LSTM-based models have two encoder layers, while self-attentional models have three layers. LatticeLSTMs are difficult to speed up through manually implemented batched computations, but similar models have been reported to strongly benefit from autobatching (Neubig et al., 2017b) which automatically finds operations that can be grouped after the computation graph has been defined. Autobatching is implemented in DyNet but not available in many other deep learning toolkits, so we test both with and without autobatching. Training computations are manually or automatically batched across 64 parallel sentences, while inference speed is tested for single sentences with forced decoding of gold translations and without beam search. We test with DyNet commit 8260090 on an Nvidia Titan Xp GPU and average results over three runs. Table 2 shows the results. For sequential inputs, the self-attentional model is slightly faster than the LSTM-based model. The difference is perhaps 4BLEU scores taken from Sperber et al. (2017). reachability mask dir. prob. latt. pos. Fisher Callh. 38.73 14.74 38.25 12.45 37.52 14.37 35.49 12.83 30.58 9.41 Table 3: Ablation over proposed features, including reachability masks, directional (vs. non-directional) masking, probabilistic (vs. binary) masking, and lattice positions (vs. topological positions). smaller than expected, which can be explained by the larger number of layers in the self-attentional model, and the relatively short sentences of the Fisher corpus that reduce the positive effect of parallel computation across sequence positions. For lattice-based inputs, we can see a large speed-up of the self-attentional approach when no autobatching is used. Replacing manual batching with autobatching during training for the self-attentional model yields no benefits. Enabling autobatching at inference time provides some speed-up for both models. Overall, the speed advantage of the selfattentional approach is still very visible even with autobatching available. 5.4 Feature Ablation We next conduct a feature ablation to examine the individual effect of the improvements introduced in § 4. Table 3 shows that longest-path position encoding outperforms topological positions, the probabilistic approach outperforms binary reachability masks, and modeling forward and reversed lattices with separate attention heads outperforms the non-directional approach. Consistently with the findings by Sperber et al. (2017), lattice scores are more effectively exploited on Fisher than on Callhome as a result of the poor lattice quality for the latter. The experiment in the last row demonstrates the effect of keeping the lattice contents but removing all structural information, by rearranging nodes in linear, arbitrary topological order, and applying the best sequential model. Results are poor and structural information clearly beneficial. 
5.5 Behavior At Test Time To obtain a better understanding of the proposed model, we compare accuracies to the sequential 1192 Lattice oracle 1-best Lattice Fisher Sequential SA 47.84 37.42 – Lattice SA 47.69 37.56 38.73 Callhome Sequential SA 17.94 13.00 – Lattice SA 18.54 13.90 14.74 Table 4: Fisher and Callhome models, tested by inputting lattice oracle paths, 1-best paths, and full lattices. self-attentional model when translating either lattice oracle paths, 1-best transcripts, or lattices. The lattice model translates sequences by treating them as lattices with only a single complete path and all transition probabilities set to 1. Table 4 shows the results for the Fisher+Fisher model evaluated on Fisher test data, and for the Fisher+Callhome model evaluated on Callhome test data. We can see that the lattice model outperforms the sequential model even when translating sequential 1-best transcripts, indicating benefits perhaps due to more robustness or increased training data size for the lattice model. However, the largest gains stem from using lattices at test time, indicating that our model is able to exploit the actual test-time lattices. Note that there is still a considerable gap to the translation of lattice oracles which form a top-line to our experiments. 5.6 Effect of Pretraining and Finetuning Finally, we analyze the importance of our strategy of pretraining on clean sequential data before finetuning on lattice data. Table 5 shows the results for several combinations of pretraining and finetuning data. The first thing to notice is that pretraining is critical for good results. Skipping pretraining performs extremely poorly, while pretraining on the much smaller Callhome data yields results no better than the sequential baselines (§ 5.2). We conjecture that pretraining is beneficial mainly due to the rather noisy lattice training data, while for tasks with cleaner training lattices pretraining may play a less critical role. The second observation is that for the finetuning stage, domain appears more important than data size: Finetuning on Fisher works best when testing on Fisher, while finetuning on Callhome works best when testing on Callhome, despite the CallSequential data Lattice data Fisher Callh. – Fisher 1.45 1.78 Callhome Fisher 34.52 13.04 Fisher Callhome 35.47 14.74 Fisher Fisher 38.73 14.59 Table 5: BLEU scores for several combinations of Fisher (138k sentences) and Callhome (15k sentences) training data. home finetuning data being an order of magnitude smaller. This is encouraging, because the collection of large amounts of training lattices can be difficult in practice. 6 Related Work The translation of lattices rather than sequences has been investigated with traditional machine translation models (Ney, 1999; Casacuberta et al., 2004; Saleem et al., 2004; Zhang et al., 2005; Matusov et al., 2008; Dyer et al., 2008), but these approaches rely on independence assumptions in the decoding process that no longer hold for neural encoder-decoder models. Neural latticeto-sequence models were proposed by Su et al. (2017); Sperber et al. (2017), with promising results but slow computation speeds. Other related work includes gated graph neural networks (Li et al., 2016; Beck et al., 2018). As an alternative to these RNN-based models, GCNs have been investigated (Duvenaud et al., 2015; Defferrard et al., 2016; Kearnes et al., 2016; Kipf and Welling, 2017), and used for devising tree-tosequence models (Bastings et al., 2017; Marcheggiani et al., 2018). 
We are not aware of any application of GCNs to lattice modeling. Unlike our approach, GCNs consider only local context, must be combined with slower LSTM layers for good performance, and lack support for lattice scores. Our model builds on previous works on selfattentional models (Cheng et al., 2016; Parikh et al., 2016; Lin et al., 2017; Vaswani et al., 2017). The idea of masking has been used for various purposes, including occlusion of future information during training (Vaswani et al., 2017), introducing directionality (Shen et al., 2018) with good results for machine translation confirmed by Song et al. (2018), and soft masking (Im and Cho, 2017; Sperber et al., 2018). The only extension of self-attention beyond sequence modeling we 1193 are aware of is graph attention (Veliˇckovi´c et al., 2018) which uses only local context and is outperformed by our model. 7 Conclusion This work extended existing sequential selfattentional models to lattice inputs, which have been useful for various purposes in the past. We achieve this by introducing probabilistic reachability masks and lattice positional encodings. Experiments in a speech translation task show that our method outperforms previous approaches and is much faster than RNN-based alternatives in both training and inference settings. Promising future work includes extension to tree-structured inputs and application to other tasks. Acknowledgments The work leading to these results has received funding from the European Union under grant agreement no 825460. References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffry E. Hinton. 2016. Layer Normalization. arXiv:1607.06450. Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In International Conference on Representation Learning (ICLR), San Diego, USA. Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima’an. 2017. Graph Convolutional Encoders for Syntax-aware Neural Machine Translation. In Empirical Methods in Natural Language Processing (EMNLP), Copenhagen, Denmark. Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-Sequence Learning using Gated Graph Neural Networks. In Association for Computational Linguistic (ACL), pages 273–283, Melbourne, Australia. Francisco Casacuberta, Hermann Ney, Franz Josef Och, Enrique Vidal, J. M. Vilar, S. Barrachina, I. Garc´ıa-Varea, D. Llorens, C. Mart´ınez, S. Molau, F. Nevado, M. Pastor, D. Pic´o, A. Sanchis, and C. Tillmann. 2004. Some approaches to statistical and finite-state speech-to-speech translation. Computer Speech and Language, 18(1):25–47. Alberto Cetoli, Stefano Bragaglia, Andrew D. O’Harney, and Marc Sloan. 2017. Graph Convolutional Networks for Named Entity Recognition. In International Workshop on Treebanks and Linguistic Theories (TLT16), pages 37–45, Prague, Czech Republic. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Empirical Methods in Natural Language Processing (EMNLP), Austin, Texas, USA. Micha¨el Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. In Advances in Neural Information Processing Systems (NIPS), pages 3844–3852, Barcelona, Spain. David Duvenaud, Dougal Maclaurin, Jorge AguileraIparraguirre, Rafael G´omez-Bombarelli, Timothy Hirzel, Al´an Aspuru-Guzik, and Ryan P. Adams. 2015. Convolutional Networks on Graphs for Learning Molecular Fingerprints. 
In Advances in Neural Information Processing Systems (NIPS), pages 2224–2232, Montr´eal, Canada. Christopher Dyer, Smaranda Muresan, and Philip Resnik. 2008. Generalizing Word Lattice Translation. Technical Report LAMP-TR-149, University of Maryland, Institute For Advanced Computer Studies. Yarin Gal and Zoubin Ghahramani. 2016. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. In Neural Information Processing Systems Conference (NIPS), pages 1019–1027, Barcelona, Spain. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional Sequence to Sequence Learning. In International Conference on Machine Learning (ICML), Sydney, Australia. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 770—-778, Las Vegas, USA. Robert L. Hemminger and Lowell W. Beineke. 1978. Line graphs and line digraphs. In Selected Topics in Graph Theory, pages 271–305. Academic Press Inc. Jinbae Im and Sungzoon Cho. 2017. Distance-based Self-Attention Network for Natural Language Inference. arXiv:1712.02047. Steven Kearnes, Kevin McCloskey, Marc Berndl, Vijay Pande, and Patrick Riley. 2016. Molecular Graph Convolutions: Moving Beyond Fingerprints. Journal of Computer-Aided Molecular Design, 30(8):595–608. Diederik P. Kingma and Jimmy L. Ba. 2014. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations (ICLR), Banff, Canada. 1194 Thomas N. Kipf and Max Welling. 2017. SemiSupervised Classification with Graph Convolutional Networks. International Conference on Learning Representations (ICLR). Faisal Ladhak, Ankur Gandhe, Markus Dreyer, Lambert Mathias, Ariya Rastrow, and Bj¨orn Hoffmeister. 2016. LatticeRnn: Recurrent Neural Networks over Lattices. In Annual Conference of the International Speech Communication Association (InterSpeech), pages 695–699, San Francisco, USA. Yujia Li, Richard Zemel, Mark Brockschmeidt, and Daniel Tarlow. 2016. Gated Graph Sequence Neural Networks. In International Conference on Learning Representations (ICLR). Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A Structured Self-attentive Sentence Embedding. In International Conference on Representation Learning (ICLR), Toulon, France. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attentionbased Neural Machine Translation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1412–1421, Lisbon, Portugal. Diego Marcheggiani, Joost Bastings, and Ivan Titov. 2018. Exploiting Semantics in Neural Machine Translation with Graph Convolutional Networks. In North American Chapter of the Association for Computational Linguistics (NAACL), pages 486– 492, New Orleans, USA. Evgeny Matusov, Bj¨orn Hoffmeister, and Hermann Ney. 2008. ASR word lattice translation with exhaustive reordering is possible. In Annual Conference of the International Speech Communication Association (InterSpeech), pages 2342–2345, Brisbane, Australia. Roberto Navigli and Paola Velardi. 2010. Learning Word-Class Lattices for Definition and Hypernym Extraction. In Association for Computational Linguistic (ACL), pages 1318–1327, Uppsala, Sweden. 
Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017a. DyNet: The Dynamic Neural Network Toolkit. arXiv preprint arXiv:1701.03980. Graham Neubig, Yoav Goldberg, and Chris Dyer. 2017b. On-the-fly Operation Batching in Dynamic Computation Graphs. In Neural Information Processing Systems Conference (NIPS), Long Beach, USA. Graham Neubig, Matthias Sperber, Xinyi Wang, Matthieu Felix, Austin Matthews, Sarguna Padmanabhan, Ye Qi, Devendra Singh Sachan, Philip Arthur, Pierre Godard, John Hewitt, Rachid Riad, and Liming Wang. 2018. XNMT: The eXtensible Neural Machine Translation Toolkit. In Conference of the Association for Machine Translation in the Americas (AMTA) Open Source Software Showcase, Boston, USA. Hermann Ney. 1999. Speech Translation: Coupling of Recognition and Translation. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 517–520, Phoenix, USA. Ankur P. Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A Decomposable Attention Model for Natural Language Inference. In Empirical Methods in Natural Language Processing (EMNLP), pages 2249–2255, Austin, USA. Matt Post, Gaurav Kumar, Adam Lopez, Damianos Karakos, Chris Callison-Burch, and Sanjeev Khudanpur. 2013. Improved Speech-to-Text Translation with the Fisher and Callhome Spanish–English Speech Translation Corpus. In International Workshop on Spoken Language Translation (IWSLT), Heidelberg, Germany. Shirin Saleem, Szu-Chen Jou, Stephan Vogel, and Tanja Schultz. 2004. Using Word Lattice Information for a Tighter Coupling in Speech Translation Systems. In International Conference on Spoken Language Processing (ICSLP), pages 41–44, Jeju Island, Korea. Anna Senina, Marcus Rohrbach, Wei Qiu, Annemarie Friedrich, Sikandar Amin, Mykhaylo Andriluka, Manfred Pinkal, and Bernt Schiele. 2014. Coherent multi-sentence video description with variable level of detail. In German Conference on Pattern Recognition (GCPR), pages 184–195, M¨unster, Germany. Springer. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018. DiSAN: Directional Self-Attention Network for RNN/CNNfree Language Understanding. In Conference on Artificial Intelligence (AAAI), New Orleans, USA. Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Empirical Methods in Natural Language Processing (EMNLP), pages 1631–1642, Seattle, USA. Kaitao Song, Xu Tan, Furong Peng, and Jianfeng Lu. 2018. Hybrid Self-Attention Network for Machine Translation. arXiv:1811.00253v2. Matthias Sperber, Graham Neubig, Jan Niehues, and Alex Waibel. 2017. Neural Lattice-to-Sequence Models for Uncertain Inputs. In Conference on 1195 Empirical Methods in Natural Language Processing (EMNLP), pages 1380–1389, Copenhagen, Denmark. Matthias Sperber, Jan Niehues, Graham Neubig, Sebastian St¨uker, and Alex Waibel. 2018. SelfAttentional Acoustic Models. In Annual Conference of the International Speech Communication Association (InterSpeech), Hyderabad, India. 
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15(1):1929–1958. Jinsong Su, Zhixing Tan, Deyi Xiong, Rongrong Ji, Xiaodong Shi, and Yang Liu. 2017. Lattice-Based Recurrent Neural Network Encoders for Neural Machine Translation. In Conference on Artificial Intelligence (AAAI), pages 3302–3308, San Francisco, USA. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the Inception Architecture for Computer Vision. In Computer Vision and Pattern Recognition (CVPR), pages 2818–2826, Las Vegas, USA. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. In Association for Computational Linguistic (ACL), pages 1556–1566, Beijing, China. Shikhar Vashishth, Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha Talukdar. 2018. Dating Documents using Graph Convolution Networks. In Association for Computational Linguistic (ACL), pages 1605–1615, Melbourne, Australia. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Neural Information Processing Systems Conference (NIPS), pages 5998–6008, Long Beach, USA. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2018. Graph Attention Networks. In International Conference on Learning Representations (ICLR), Vancouver, Canada. Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a Foreign Language. In Neural Information Processing Systems Conference (NIPS), Montr´eal, Canada. Ruiqiang Zhang, Genichiro Kikui, Hirofumi Yamamoto, and Wai-Kit Lo. 2005. A Decoding Algorithm for Word Lattice Translation in Speech Translation. In International Workshop on Spoken Language Translation (IWSLT), pages 23–29, Pittsburgh, USA. Xiaodan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long Short-Term Memory Over Recursive Structures. In International Conference on Machine Learning (ICML), pages 1604–1612, Lille, France. 1196 A Path Duplication Invariance Figure 5 shows a sequential lattice, and a lattice derived from it but with a duplicated path. Semantically, both are equivalent, and should therefore result in identical neural representations. Note that while in practice duplicated paths should not occur, paths with partial overlap are quite frequent. It is therefore instructive to consider this hypothetical situation. Below, we demonstrate that the binary masking approach (§ 4.1.1) is biased such that computed representations are impacted by path duplication. In contrast, the probabilistic approach (§ 4.1.2) is invariant to path duplication. We consider the example of Figure 5, discussing only the forward direction, because the lattice is symmetric and computations for the backward direction are identical. We follow notation of Equations 1 through 3, using ⟨a, b⟩as abbrevation for f (q (xa) , k (xb)) and va to abbreviate v(xa). Let us consider the computed representation for the node S as query. For the sequential lattice with binary mask, it is: yS = 1 C  e⟨S,S⟩vS + e⟨S,a⟩va + e⟨S,b⟩vb  (11) Here, C is the softmax normalization term that ensures that exponentiated similarities sum up to 1. 
In contrast, the lattice with duplication results in a doubled influence of va: yS = 1 C  e⟨S,S⟩vS + e⟨S,a⟩va + e⟨S,a’⟩va’ + e⟨S,E⟩vE  = 1 C  e⟨S,S⟩vS + 2e⟨S,a⟩va + e⟨S,E⟩vE  . The probabilistic approach yields the same result as the binary approach for the sequential lattice (Equation 11). For the lattice with path duplication, the representation for the node S is coma S E a 1 p 1 1-p ‘ a S E 1 1 sequential S a E S 1 1 1 a 0 1 1 E 0 0 1 duplicated S a a’ E S 1 p (1 −p) 1 a 0 1 0 1 a’ 0 0 1 1 E 0 0 0 1 Figure 5: A sequential lattice, and a variant with a duplicated path, where nodes a and a’ are labeled with the same word token. The matrices contain pairwise reaching probabilities in forward direction, where rows are queries, columns are keys. puted as follows: yS = 1 C  e⟨S,S⟩vS + e⟨S,a⟩+log pva + e⟨S,a’⟩+log(1−p)va’ + e⟨S,E⟩vE  = 1 C  e⟨S,S⟩vS + e⟨S,a⟩elog pva + e⟨S,a’⟩elog(1−p)va’ + e⟨S,E⟩vE  = 1 C  e⟨S,S⟩vS + pe⟨S,a⟩va + (1 −p)e⟨S,a’⟩va’ + e⟨S,E⟩vE  = 1 C  e⟨S,S⟩vS + e⟨S,a⟩va + e⟨S,E⟩vE  . The result is the same as in the semantically equivalent sequential case (Equation 11), the computation is therefore invariant to path duplication. The same argument can be extended to other queries, to other lattices with duplicated paths, as well as to the lattice-biased encoder-decoder attention. B Qualitative Analysis We conduct a manual inspection and showcase several common patterns in which the lattice input helps improve translation quality, as well as one counter example. In particular, we compare the outputs of the sequential and lattice models according to the 3rd and the last row in Table 1, on Fisher. 1197 B.1 Example 1 In this example, the ASR 1-best contains a bad word choice (quedar instead of qu´e tal). The correct word is in the lattice, and can be disambiguated by exploiting long-range self-attentional encoder context. gold transcript: Qu´e tal, eh, yo soy Guillermo, ¿C´omo est´as? ASR 1-best: quedar eh yo soy guillermo c´omo est´as seq2seq output: stay eh i ’ m guillermo how are you ASR lattice: quedar S .7 .2 que dar qué tal … … … .1 1 1 lat2seq output: how are you eh i ’ m guillermo how are you B.2 Example 2 Here, the correct word graduar does not appear in the lattice, instead the lattice offers many incorrect alternatives of high uncertainty. The translation model evidently goes with a linguistically plausible guess, ignoring the source side. gold transcript: Claro Es, eh, eh, o sea, yo me, me voy a graduar con un t´ıtulo de esta universidad. ASR 1-best: claro existe eh o sea yo me me puedo habar con un t´ıtulo esta universidad seq2seq output: sure it exists i mean i can talk with a title ASR lattice: quedar S .7 .2 que dar qué tal … … … .1 1 1 puedo voy habar ahora a hablar grabar lavar … … … … … 1 .1 .9 .1 .7 .2 lat2seq output: sure i mean i ’ m going to take a university title B.3 Example 3 In this example, o sea (I mean) appears with slightly lower confidence than saben (they know), but is chosen for a more natural sounding target sentence gold transcript: No, o sea, eso es eh, clar´ısimo para mi ASR 1-best: no saben eso es eh clar´ısimo para mi seq2seq output: they don ’ t know that ’ s eh sure for me ASR lattice: quedar S .7 .2 que dar qué tal … … … .1 1 1 o S .34 .37 saben no … sea … … .29 lat2seq output: no i mean that ’ s very clear for me B.4 Counter Example In this counter example, the translation model gets confused from the additional and wrong lattice context and no longer produces the correct output. 
gold transcript: sí
ASR 1-best: sí
seq2seq output: yes
ASR lattice: [lattice figure: alternatives include "mhm", "mm", and "sí"]
lat2seq output: mm
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1198–1212 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1198 When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion Elena Voita1,2 Rico Sennrich3,4 Ivan Titov3,2 1Yandex, Russia 2University of Amsterdam, Netherlands 3University of Edinburgh, Scotland 4University of Zurich, Switzerland [email protected] [email protected] [email protected] Abstract Though machine translation errors caused by the lack of context beyond one sentence have long been acknowledged, the development of context-aware NMT systems is hampered by several problems. Firstly, standard metrics are not sensitive to improvements in consistency in document-level translations. Secondly, previous work on context-aware NMT assumed that the sentence-aligned parallel data consisted of complete documents while in most practical scenarios such document-level data constitutes only a fraction of the available parallel data. To address the first issue, we perform a human study on an English-Russian subtitles dataset and identify deixis, ellipsis and lexical cohesion as three main sources of inconsistency. We then create test sets targeting these phenomena. To address the second shortcoming, we consider a set-up in which a much larger amount of sentence-level data is available compared to that aligned at the document level. We introduce a model that is suitable for this scenario and demonstrate major gains over a context-agnostic baseline on our new benchmarks without sacrificing performance as measured with BLEU.1 1 Introduction With the recent rapid progress of neural machine translation (NMT), translation mistakes and inconsistencies due to the lack of extra-sentential context are becoming more and more noticeable among otherwise adequate translations produced by standard context-agnostic NMT systems (Läubli et al., 2018). Though this problem has recently triggered a lot of attention to contextaware translation (Jean et al., 2017a; Wang et al., 2017; Tiedemann and Scherrer, 2017; Bawden 1We release code and data sets at https://github.com/lena-voita/ good-translation-wrong-in-context. et al., 2018; Voita et al., 2018; Maruf and Haffari, 2018; Agrawal et al., 2018; Miculicich et al., 2018; Zhang et al., 2018), the progress and widespread adoption of the new paradigm is hampered by several important problems. Firstly, it is highly non-trivial to design metrics which would reliably trace the progress and guide model design. Standard machine translation metrics (e.g., BLEU) do not appear appropriate as they do not sufficiently differentiate between consistent and inconsistent translations (Wong and Kit, 2012).2 For example, if multiple translations of a name are possible, forcing consistency is essentially as likely to make all occurrences of the name match the reference translation as making them all different from the reference. Second, most previous work on context-aware NMT has made the assumption that all the bilingual data is available at the document level. However, isolated parallel sentences are a lot easier to acquire and hence only a fraction of the parallel data will be at the document level in any practical scenario. In other words, a context-aware model trained only on documentlevel parallel data is highly unlikely to outperform a context-agnostic model estimated from much larger sentence-level parallel corpus. 
This work aims to address both these shortcomings. A context-agnostic NMT system would often produce plausible translations of isolated sentences, however, when put together in a document, these translations end up being inconsistent with each other. We investigate which linguistic phenomena cause the inconsistencies using the OpenSubtitles (Lison et al., 2018) corpus for the English-Russian language pair. We identify deixis, ellipsis and lexical cohesion as three 2We use the term ‘inconsistency’ to refer to any violations causing good translations of isolated sentences not to work together, independently of which linguistic phenomena (e.g., ellipsis or lexical cohesion) impose the violated constraints. 1199 main sources of the violations, together amounting to about 80% of the cases. We create test sets focusing specifically on the three identified phenomena (6000 examples in total). We show that by using a limited amount of document-level parallel data, we can already achieve substantial improvements on these benchmarks without negatively affecting performance as measured with BLEU. Our approach is inspired by the Deliberation Networks (Xia et al., 2017). In our method, the initial translation produced by a baseline context-agnostic model is refined by a context-aware system which is trained on a small document-level subset of parallel data. The key contributions are as follows: • we analyze which phenomena cause contextagnostic translations to be inconsistent with each other; • we create test sets specifically addressing the most frequent phenomena; • we consider a novel and realistic set-up where a much larger amount of sentencelevel data is available compared to that aligned at the document level; • we introduce a model suitable for this scenario, and demonstrate that it is effective on our new benchmarks without sacrificing performance as measured with BLEU. 2 Analysis We begin with a human study, in which we: 1. identify cases when good sentence-level translations are not good when placed in context of each other, 2. categorize these examples according to the phenomena leading to a discrepancy in translations of consecutive sentences. The test sets introduced in Section 3 will then target the most frequent phenomena. 2.1 Human annotation To find what makes good context-agnostic translations incorrect when placed in context of each other, we start with pairs of consecutive sentences. We gather data with context from the publicly available OpenSubtitles2018 corpus (Lison et al., all one/both bad both good bad pair good pair 2000 211 140 1649 100% 11% 7% 82% Table 1: Human annotation statistics of pairs of consecutive translation. 2018) for English and Russian. We train a contextagnostic Transformer on 6m sentence pairs. Then we translate 2000 pairs of consecutive sentences using this model. For more details on model training and data preprocessing, see Section 5.3. Then we use human annotation to assess the adequacy of the translations without context and in the context of each other. The whole process is two-stage: 1. sentence-level evaluation: we ask if the translation of a given sentence is good, 2. evaluation in context: for pairs of consecutive good translations according to the first stage, we ask if the translations are good in context of each other. In the first stage, the annotators are instructed to mark as “good” translations which (i) are fluent sentences in the target language (in our case, Russian) (ii) can be reasonable translations of a source sentence in some context. 
For the second stage we only consider pairs of sentences with good sentence-level translations. The annotators are instructed to mark translations as bad in context of each other only if there is no other possible interpretation or extra additional context which could have made them appropriate. This was made to get more robust results, avoiding the influence of personal preferences of the annotators (for example, for using formal or informal speech), and excluding ambiguous cases that can only be resolved with additional context. The statistics of answers are provided in Table 1. We find that our annotators labelled 82% of sentence pairs as good translations. In 11% of cases, at least one translation was considered bad at the sentence level, and in another 7%, the sentences were considered individually good, but bad in context of each other. This indicates that in our setting, a substantial proportion of translation errors are only recognized as such in context. 1200 type of phenomena frequency deixis 37% ellipsis 29% lexical cohesion 14% ambiguity 9% anaphora 6% other 5% Table 2: Types of phenomena causing discrepancy in context-agnostic translation of consecutive sentences when placed in the context of each other type of discrepancy frequency T-V distinction 67% speaker/addressee gender: same speaker 22% different speaker 9% other 2% Table 3: Types of discrepancy in context-agnostic translation caused by deixis (excluding anaphora) 2.2 Types of phenomena From the results of the human annotation, we take all instances of consecutive sentences with good translations which become incorrect when placed in the context of each other. For each, we identify the language phenomenon which caused a discrepancy. The results are provided in Table 2. Below we discuss these types of phenomena, as well as problems in translation they cause, in more detail. In the scope of current work, we concentrate only on the three most frequent phenomena. 2.2.1 Deixis In this category, we group several types of deictic words or phrases, i.e. referential expressions whose denotation depends on context. This includes personal deixis (“I”, “you”), place deixis (“here”, “there”), and discourse deixis, where parts of the discourse are referenced (“that’s a good question.”). Most errors in our annotated corpus are related to person deixis, specifically gender marking in the Russian translation, and the T-V distinction between informal and formal you (Latin “tu” and “vos”). In many cases, even when having access to neighboring sentences, one cannot make a confident decision which of the forms should be used, as there are no obvious markers pointing to one form or another (e.g., for the T-V distinction, words such as “officer”, “mister” for formal and “honey”, “dude” for informal). However, when (a) EN We haven’t really spoken much since your return. Tell me, what’s on your mind these days? RU Мы не разговаривали с тех пор, как вы вернулись. Скажи мне, что у тебя на уме в последнее время? RU My ne razgovarivali s tekh por, kak vy vernulis’. Skazhi mne, chto u tebya na ume v posledneye vremya? (b) EN I didn’t come to Simon’s for you. I did that for me. RU Я пришла к Саймону не ради тебя. Я сделал это для себя. RU Ya prishla k Saymonu ne radi tebya. Ya sdelal eto dlya sebya. Figure 1: Examples of violation of (a) T-V form consistency, (b) speaker gender consistency. In color: (a) red – V-form, blue – T-form; (b) red – feminine, blue – masculine. 
pronouns refer to the same person, the pronouns, as well as verbs that agree with them, should be translated using the same form. See Figure 1(a) for an example translation that violates T-V consistency. Figure 1(b) shows an example of inconsistent first person gender (marked on the verb), although the speaker is clearly the same. Anaphora are a form of deixis that received a lot of attention in MT research, both from the perspective of modelling (Le Nagard and Koehn, 2010; Hardmeier and Federico, 2010; Jean et al., 2017b; Bawden et al., 2018; Voita et al., 2018, among others) and targeted evaluation (Hardmeier et al., 2015; Guillou and Hardmeier, 2016; Müller et al., 2018), and we list anaphora errors separately, and will not further focus on them. 2.2.2 Ellipsis Ellipsis is the omission from a clause of one or more words that are nevertheless understood in the context of the remaining elements. In machine translation, elliptical constructions in the source language pose a problem if the target language does not allow the same types of ellipsis (requiring the elided material to be predicted from context), or if the elided material affects the syntax of the sentence; for example, the grammatical function of a noun phrase and thus its inflection in Russian may depend on the elided verb (Figure 2(a)), or the verb inflection may depend on the 1201 type of discrepancy frequency wrong morphological form 66% wrong verb (VP-ellipsis) 20% other error 14% Table 4: Types of discrepancy in context-agnostic translation caused by ellipsis (a) EN You call her your friend but have you been to her home ? Her work ? RU Ты называешь её своей подругой, но ты был у неё дома? Её работа? RU Ty nazyvayesh’ yeyo svoyey podrugoy, no ty byl u neye doma? Yeyo rabota? (b) EN Veronica, thank you, but you saw what happened. We all did. RU Вероника, спасибо, но ты видела, что произошло. Мы все хотели. RU Veronika, spasibo, no ty videla, chto proizoshlo. My vse khoteli. Figure 2: Examples of discrepancies caused by ellipsis. (a) wrong morphological form, incorrectly marking the noun phrase as a subject. (b) correct meaning is “see”, but MT produces хотели khoteli (“want”). elided subject. Our analysis focuses on ellipses that can only be understood and translated with context beyond the sentence-level. This has not been studied extensively in MT research.3 We classified ellipsis examples which lead to errors in sentence-level translations by the type of error they cause. Results are provided in Table 4. It can be seen that the most frequent problems related to ellipsis that we find in our annotated corpus are wrong morphological forms, followed by wrongly predicted verbs in case of verb phrase ellipsis in English, which does not exist in Russian, thus requiring the prediction of the verb in the Russian translation (Figure 2(b)). 2.2.3 Lexical cohesion Lexical cohesion has been studied previously in MT (Tiedemann, 2010; Gong et al., 2011; Wong and Kit, 2012; Kuang et al., 2018; Miculicich et al., 2018, among others). There are various cohesion devices (Morris and Hirst, 1991), and a good translation should exhibit lexical cohesion beyond the sentence level. We 3Exceptions include (Yamamoto and Sumita, 1998), and work on the related phenomenon of pronoun dropping (Russo et al., 2012; Wang et al., 2016; Rios and Tuggener, 2017). (a) EN Not for Julia. Julia has a taste for taunting her victims. RU Не для Джулии. Юлия умеет дразнить своих жертв. RU Ne dlya Dzhulii. Yuliya umeyet draznit’ svoikh zhertv. 
(b) EN But that’s not what I’m talking about. I’m talking about your future. RU Но я говорю не об этом. Речь о твоём будущем. RU No ya govoryu ne ob etom. Rech’ o tvoyom budushchem. Figure 3: Examples of lack of lexical cohesion in MT. (a) Name translation inconsistency. (b) Inconsistent translation. Using either of the highlighted translations consistently would be good. focus on repetition with two frequent cases in our annotated corpus being reiteration of named entities (Figure 3(a)) and reiteration of more general phrase types for emphasis (Figure 3(b)) or in clarification questions. 3 Test Sets For the most frequent phenomena from the above analysis we create test sets for targeted evaluation. Each test set contains contrastive examples. It is specifically designed to test the ability of a system to adapt to contextual information and handle the phenomenon under consideration. Each test instance consists of a true example (sequence of sentences and their reference translation from the data) and several contrastive translations which differ from the true one only in the considered aspect. All contrastive translations we use are correct plausible translations at a sentence level, and only context reveals the errors we introduce. All the test sets are guaranteed to have the necessary context in the provided sequence of 3 sentences. The system is asked to score each candidate example, and we compute the system accuracy as the proportion of times the true translation is preferred over the contrastive ones. Test set statistics are shown in Table 5. 3.1 Deixis From Table 3, we see that the most frequent error category related to deixis in our annotated corpus is the inconsistency of T-V forms when translating second person pronouns. The test set we 1202 latest relevant context total 1st 2nd 3rd deixis 3000 1000 1000 1000 lex. cohesion 2000 855 630 515 ellipsis (infl.) 500 ellipsis (VP) 500 Table 5: Size of test sets: total number of test instances and with regard to the latest context sentence with politeness indication or with the named entity under consideration. For ellipsis, we distinguish whether model has to predict correct noun phrase inflection, or correct verb sense (VP ellipsis). construct for this category tests the ability of a machine translation system to produce translations with consistent level of politeness. We semi-automatically identify sets of consecutive sentences with consistent politeness markers on pronouns and verbs (but without nominal markers such as “’Mr.” or “officer”) and switch T and V forms. Each automatic step was followed by human postprocessing, which ensures the quality of the final test sets.4 This gives us two sets of translations for each example, one consistently informal (T), and one consistently formal (V). For each, we create an inconsistent contrastive example by switching the formality of the last sentence. The symmetry of the test set ensures that any contextagnostic model has 50% accuracy on the test set. 3.2 Ellipsis From Table 4, we see that the two most frequent types of ambiguity caused by the presence of an elliptical structure have different nature, hence we construct individual test sets for each of them. Ambiguity of the first type comes from the inability to predict the correct morphological form of some words. We manually gather examples with such structures in a source sentence and change the morphological inflection of the relevant target phrase to create contrastive translation. 
Specifically, we focus on noun phrases where the verb is elided, and the ambiguity lies in how the noun phrase is inflected. The second type we evaluate are verb phrase ellipses. Mostly these are sentences with an auxiliary verb “do” and omitted main verb. We manually gather such examples and replace the translation of the verb, which is only present on the target side, with other verbs with different meaning, but 4Details are provided in the appendix. the same inflection. Verbs which are used to construct such contrastive translations are the top-10 lemmas of translations of the verb “do” which we get from the lexical table of Moses (Koehn et al., 2007) induced from the training data. 3.3 Lexical cohesion Lexical cohesion can be established for various types of phrases and can involve reiteration or other semantic relations. In the scope of the current work, we focus on the reiteration of entities, since these tend to be non-coincidental, and can be easily detected and transformed. We identify named entities with alternative translations into Russian, find passages where they are translated consistently, and create contrastive test examples by switching the translation of some instances of the named entity. For more details, please refer to the appendix. 4 Model and Setting 4.1 Setting Previous work on context-aware neural machine translation used data where all training instances have context. This setting limits the set of available training sets one can use: in a typical scenario, we have a lot of sentence-level parallel data and only a small fraction of document-level data. Since machine translation quality depends heavily on the amount of training data, training a contextaware model is counterproductive if this leads to ignoring the majority of available sentence-level data and sacrificing general quality. We will also show that a naive approach to combining sentencelevel and document-level data leads to a drop in performance. In this work, we argue that it is important to consider an asymmetric setting where the amount of available document-level data is much smaller than that of sentence-level data, and propose an approach specifically targeting this scenario. 4.2 Model We introduce a two-pass framework: first, the sentence is translated with a context-agnostic model, and then this translation is refined using context of several previous sentences (context includes source sentences as well as their translations). We expect this architecture to be suitable in the proposed setting: the baseline context-agnostic model can be trained on a large amount of sentence-level 1203 Figure 4: Model architecture data, and the second-pass model can be estimated on a smaller subset of parallel data which includes context. As the first-pass translation is produced by a strong model, we expect no loss in general performance when training the second part on a smaller dataset. The model is close in spirit to the Deliberation networks (Xia et al., 2017). The first part of the model is a context-agnostic model (we refer to it as the base model), and the second one is a contextaware decoder (CADec) which refines contextagnostic translations using context. The base model is trained on sentence-level data and then fixed. It is used only to sample context-agnostic translations and to get vector representations of the source and translated sentences. CADec is trained only on data with context. 
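As a schematic illustration of this two-pass flow, the sketch below wires a fixed first-pass translator and a context-aware refinement step together. The function names and the toy "refinement" (switching an informal pronoun to the formal form so that it matches a polite context sentence) are hypothetical stand-ins for exposition, not the actual model interfaces.

```python
from typing import Callable, List, Tuple

def two_pass_translate(
    source: str,
    context: List[Tuple[str, str]],   # previous (source, translation) pairs
    base_translate: Callable[[str], str],
    cadec_refine: Callable[[str, str, List[Tuple[str, str]]], str],
) -> str:
    draft = base_translate(source)                # first pass: context-agnostic draft
    return cadec_refine(source, draft, context)   # second pass: refine using context

# Toy usage: the "refinement" switches the informal pronoun to the formal
# one, consistent with the polite context sentence.
print(two_pass_translate(
    "How are you?",
    [("Hello, sir.", "Здравствуйте, сэр.")],
    base_translate=lambda s: "Как ты?",
    cadec_refine=lambda s, d, c: d.replace("ты", "вы"),
))
```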
Let Dsent = {(xi, yi)}N i=1 denote the sentencelevel data with n paired sentences and Ddoc = {(xj, yj, cj)}M j=1 denote the document-level data, where (xj, yj) is source and target sides of a sentence to be translated, cj are several preceding sentences along with their translations. Base model For the baseline context-agnostic model we use the original Transformerbase (Vaswani et al., 2017), trained to maximize the sentence-level log-likelihood 1 N P (xi,yi)∈Dsent log P(yi|xi, θB). Context-aware decoder (CADec) The contextaware decoder is trained to correct translations given by the base model using contextual information. Namely, we maximize the following document-level log-likelihood: 1 M X (xj,yj)∈Ddoc log EyB j ∝P(y|xj,θB)P(yj|xj, yB j , cj, θC), where yB j is sampled from P(y|xj, θB). CADec is composed of a stack of N = 6 identical layers and is similar to the decoder of the original Transformer. It has a masked self-attention layer and attention to encoder outputs, and additionally each layer has a block attending over the outputs of the base decoder (Figure 4). We use the states from the last layer of the base model’s encoder of the current source sentence and all context sentences as input to the first multi-head attention. For the second multi-head attention we input both last states of the base decoder and the target-side token embedding layer; this is done for translations of the source and also all context sentences. All sentence representations are produced by the base model. To encode the relative position of each sentence, we concatenate both the encoder and decoder states with one-hot vectors representing their position (0 for the source sentence, 1 for the immediately preceding one, etc). These distance embeddings are shown in blue in Figure 4. 5 Experiments 5.1 Training At training time, we use reference translations as translations of the previous sentences. For the cur1204 rent sentence, we either sample a translation from the base model or use a corrupted version of the reference translation. We propose to stochastically mix objectives corresponding to these versions: 1 M X (xj,yj)∈Ddoc log h bj · P(yj|xj, ˜yj, cj, θC))+ + (1 −bj) · P(yj|xj, yB j , cj, θC) i , where ˜yj is a corrupted version of the reference translation and bj ∈{0, 1} is drawn from Bernoulli distribution with parameter p, p = 0.5 in our experiments. Reference translations are corrupted by replacing 20% of their tokens with random tokens. We discuss the importance of the proposed training strategy, as well as the effect of varying the value of p, in Section 6.5. 5.2 Inference As input to CADec for the current sentence, we use the translation produced by the base model. Target sides of the previous sentences are produced by our two-stage approach for those sentences which have context and with the base model for those which do not. We use beam search with a beam of 4 for all models. 5.3 Data and setting We use the publicly available OpenSubtitles2018 corpus (Lison et al., 2018) for English and Russian. As described in detail in the appendix, we apply data cleaning after which only a fraction of data has context of several previous sentences. We use up to 3 context sentences in this work. We randomly choose 6 million training instances from the resulting data, among which 1.5m have context of three sentences. We randomly choose two subsets of 10k instances for development and testing and construct our contrastive test sets from 400k held-out instances from movies not encountered in training. 
The hyperparameters, preprocessing and training details are provided in the supplementary material. 6 Results We evaluate in two different ways: using BLEU for general quality and the proposed contrastive test sets for consistency. We show that models indistinguishable with BLEU can be very different in terms of consistency. We randomly choose 500 out of 2000 examples from the lexical cohesion set and 500 out of 3000 from the deixis test set for validation and leave the rest for final testing. We compute BLEU on the development set as well as scores on lexical cohesion and deixis development sets. We use convergence in both metrics to decide when to stop training. The importance of using both criteria is discussed in Section 6.4. After the convergence, we average 5 checkpoints and report scores on the final test sets. 6.1 Baselines We consider three baselines. baseline The context-agnostic baseline is Transformer-base trained on all sentence-level data. Recall that it is also used as the base model in our 2-stage approach. concat The first context-aware baseline is a simple concatenation model. It is trained on 6m sentence pairs, including 1.5m having 3 context sentences. For the concatenation baseline, we use a special token separating sentences (both on the source and target side). s-hier-to-2.tied This is the version of the model s-hier-to-2 introduced by Bawden et al. (2018), where the parameters between encoders are shared (Müller et al., 2018). The model has an additional encoder for source context, whereas the target side of the corpus is concatenated, in the same way as for the concatenation baseline. Since the model is suitable only for one context sentence, it is trained on 6m sentence pairs, including 1.5m having one context sentence. We chose s-hier-to-2.tied as our second context-aware baseline because it also uses context on the target side and performed best in a contrastive evaluation of pronoun translation (Müller et al., 2018). 6.2 General results BLEU scores for our model and the baselines are given in Table 6.5 For context-aware models, all sentences in a group were translated, and then only the current sentence is evaluated. We also report BLEU for the context-agnostic baseline trained only on 1.5m dataset to show how the performance is influenced by the amount of data. We observe that our model is no worse in BLEU than the baseline despite the second-pass model 5We use bootstrap resampling (Koehn, 2004) for significance testing. 1205 model BLEU baseline (1.5m) 29.10 baseline (6m) 32.40 concat 31.56 s-hier-to-2.tied 26.68 CADec 32.38 Table 6: BLEU scores. CADec trained with p = 0.5. Scores for CADec are not statistically different from the baseline (6m). being trained only on a fraction of the data. In contrast, the concatenation baseline, trained on a mixture of data with and without context is about 1 BLEU below the context-agnostic baseline and our model when using all 3 context sentences. CADec’s performance remains the same independently from the number of context sentences (1, 2 or 3) as measured with BLEU. s-hier-to-2.tied performs worst in terms of BLEU, but note that this is a shallow recurrent model, while others are Transformer-based. It also suffers from the asymmetric data setting, like the concatenation baseline. 6.3 Consistency results Scores on the deixis, cohesion and ellipsis test sets are provided in Tables 7 and 8. For all tasks, we observe a large improvement from using context. 
For deixis, the concatenation model (concat) and CADec improve over the baseline by 33.5 and 31.6 percentage points, respectively. On the lexical cohesion test set, CADec shows a large improvement over the context-agnostic baseline (12.2 percentage points), while concat performs similarly to the baseline. For ellipsis, both models improve substantially over the baseline (by 19-51 percentage points), with concat stronger for inflection tasks and CADec stronger for VPellipsis. Despite its low BLEU score, s-hier-to2.tied also shows clear improvements over the context-agnostic baseline in terms of consistency, but underperforms both the concatenation model and CADec, which is unsurprising given that it uses only one context sentence. When looking only at the scores where the latest relevant context is in the model’s context window (column 2 in Table 7), s-hier-to-2.tied outperforms the concatenation baseline for lexical cohesion, but remains behind the performance of CADec. The proposed test sets let us distinguish models latest relevant context total 1st 2nd 3rd deixis baseline 50.0 50.0 50.0 50.0 concat 83.5 88.8 85.6 76.4 s-hier-to-2.tied 60.9 83.0 50.1 50.0 CADec 81.6 84.6 84.4 75.9 lexical cohesion baseline 45.9 46.1 45.9 45.4 concat 47.5 48.6 46.7 46.7 s-hier-to-2.tied 48.9 53.0 46.1 45.4 CADec 58.1 63.2 52.0 56.7 Table 7: Accuracy for deixis and lexical cohesion. ellipsis (infl.) ellipsis (VP) baseline 53.0 28.4 concat 76.2 76.6 s-hier-to-2.tied 66.4 65.6 CADec 72.2 80.0 Table 8: Accuracy on ellipsis test set. Figure 5: BLEU and lexical cohesion accuracy on the development sets during CADec training. which are otherwise identical in terms of BLEU: the performance of the baseline and CADec is the same when measured with BLEU, but very different in terms of handling contextual phenomena. 6.4 Context-aware stopping criteria Figure 5 shows that for context-aware models, BLEU is not sufficient as a criterion for stopping: even when a model has converged in terms of BLEU, it continues to improve in terms of consistency. For CADec trained with p = 0.5, BLEU score has stabilized after 40k batches, but the lexical cohesion score continues to grow. 1206 p BLEU deixis lex. c. ellipsis p=0 32.34 84.1 48.7 65 / 75 p=0.25 32.31 83.3 52.4 67 / 78 p=0.5 32.38 81.6 58.1 72 / 80 p=0.75 32.45 80.0 65.0 70 / 80 Table 9: Results for different probabilities of using corrupted reference at training time. BLEU for 3 context sentences. For ellipsis, we show inflection/VP scores. 6.5 Ablation: using corrupted reference At training time, CADec uses either a translation sampled from the base model or a corrupted reference translation as the first-pass translation of the current sentence. The purpose of using a corrupted reference instead of just sampling is to teach CADec to rely on the base translation and not to change it much. In this section, we discuss the importance of the proposed training strategy. Results for different values of p are given in Table 9. All models have about the same BLEU, not statistically significantly different from the baseline, but they are quite different in terms of incorporating context. The denoising positively influences almost all tasks except for deixis, yielding the largest improvement on lexical cohesion. 7 Additional Related Work In concurrent work, Xiong et al. (2018) also propose a two-pass context-aware translation model inspired by deliberation network. 
However, while they consider a symmetric data scenario where all available training data has document-level context, and train all components jointly on this data, we focus on an asymmetric scenario where we have a large amount of sentence-level data, used to train our first-pass model, and a smaller amount of document-level data, used to train our secondpass decoder, keeping the first-pass model fixed. Automatic evaluation of the discourse phenomena we consider is challenging. For lexical cohesion, Wong and Kit (2012) count the ratio between the number of repeated and lexically similar content words over the total number of content words in a target document. However, Guillou (2013); Carpuat and Simard (2012) find that translations generated by a machine translation system tend to be similarly or more lexically consistent, as measured by a similar metric, than human ones. This even holds for sentence-level systems, where the increased consistency is not due to improved cohesion, but accidental – Ott et al. (2018) show that beam search introduces a bias towards frequent words, which could be one factor explaining this finding. This means that a higher repetition rate does not mean that a translation system is in fact more cohesive, and we find that even our baseline is more repetitive than the human reference. 8 Conclusions We analyze which phenomena cause otherwise good context-agnostic translations to be inconsistent when placed in the context of each other. Our human study on an English–Russian dataset identifies deixis, ellipsis and lexical cohesion as three main sources of inconsistency. We create test sets focusing specifically on the identified phenomena. We consider a novel and realistic set-up where a much larger amount of sentence-level data is available compared to that aligned at the document level and introduce a model suitable for this scenario. We show that our model effectively handles contextual phenomena without sacrificing general quality as measured with BLEU despite using only a small amount of document-level data, while a naive approach to combining sentence-level and document-level data leads to a drop in performance. We show that the proposed test sets allow us to distinguish models (even though identical in BLEU) in terms of their consistency. To build context-aware machine translation systems, such targeted test sets should prove useful, for validation, early stopping and for model selection. Acknowledgments We would like to thank the anonymous reviewers for their comments and Ekaterina Enikeeva for the help with initial phenomena classification. The authors also thank Yandex Machine Translation team for helpful discussions and inspiration. Ivan Titov acknowledges support of the European Research Council (ERC StG BroadSem 678254) and the Dutch National Science Foundation (NWO VIDI 639.022.518). Rico Sennrich acknowledges support from the Swiss National Science Foundation (105212_169888), the European Union’s Horizon 2020 research and innovation programme (grant agreement no 825460), and the Royal Society (NAF\R1\180122). 1207 References Ruchit Agrawal, Turchi Marco, and Negri Matteo. 2018. Contextual Handling in Neural Machine Translation: Look Behind, Ahead and on Both Sides. Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2018. Evaluating Discourse Phenomena in Neural Machine Translation. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1304–1313, New Orleans, USA. Association for Computational Linguistics. Marine Carpuat and Michel Simard. 2012. The trouble with smt consistency. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 442–449, Montréal, Canada. Association for Computational Linguistics. Zhengxian Gong, Min Zhang, and Guodong Zhou. 2011. Cache-based document-level statistical machine translation. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 909–919, Edinburgh, Scotland, UK. Association for Computational Linguistics. Liane Guillou. 2013. Analysing lexical consistency in translation. In Proceedings of the Workshop on Discourse in Machine Translation, pages 10–18, Sofia, Bulgaria. Association for Computational Linguistics. Liane Guillou and Christian Hardmeier. 2016. Protest: A test suite for evaluating pronouns in machine translation. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France. European Language Resources Association (ELRA). Christian Hardmeier and Marcello Federico. 2010. Modelling Pronominal Anaphora in Statistical Machine Translation. In Proceedings of the seventh International Workshop on Spoken Language Translation (IWSLT), pages 283–289. Christian Hardmeier, Preslav Nakov, Sara Stymne, Jörg Tiedemann, Yannick Versley, and Mauro Cettolo. 2015. Pronoun-focused mt and cross-lingual pronoun prediction: Findings of the 2015 discomt shared task on pronoun translation. In Proceedings of the Second Workshop on Discourse in Machine Translation, pages 1–16. Association for Computational Linguistics. Sebastien Jean, Stanislas Lauly, Orhan Firat, and Kyunghyun Cho. 2017a. Does Neural Machine Translation Benefit from Larger Context? In arXiv:1704.05135. ArXiv: 1704.05135. Sébastien Jean, Stanislas Lauly, Orhan Firat, and Kyunghyun Cho. 2017b. Neural machine translation for cross-lingual pronoun prediction. In Proceedings of the Third Workshop on Discourse in Machine Translation, pages 54–57. Association for Computational Linguistics. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representation (ICLR 2015). Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brook Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics. Mikhail Korobov. 2015. Morphological analyzer and generator for russian and ukrainian languages. In Analysis of Images, Social Networks and Texts, volume 542 of Communications in Computer and Information Science, pages 320–332. Springer International Publishing. Shaohui Kuang, Deyi Xiong, Weihua Luo, and Guodong Zhou. 2018. Modeling coherence for neural machine translation with dynamic and topic caches. 
In Proceedings of the 27th International Conference on Computational Linguistics, pages 596–606. Association for Computational Linguistics. Samuel Läubli, Rico Sennrich, and Martin Volk. 2018. Has Machine Translation Achieved Human Parity? A Case for Document-level Evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4791–4796. Association for Computational Linguistics. Ronan Le Nagard and Philipp Koehn. 2010. Aiding pronoun translation with co-reference resolution. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 252–261, Uppsala, Sweden. Association for Computational Linguistics. Pierre Lison, Jörg Tiedemann, and Milen Kouylekov. 2018. Opensubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. Sameen Maruf and Gholamreza Haffari. 2018. Document context neural machine translation with memory networks. In Proceedings of the 56th Annual 1208 Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1275– 1284, Melbourne, Australia. Association for Computational Linguistics. Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2947–2954, Brussels, Belgium. Association for Computational Linguistics. Jane Morris and Graeme Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics (Volume 17), pages 21–48. Mathias Müller, Annette Rios, Elena Voita, and Rico Sennrich. 2018. A Large-Scale Test Set for the Evaluation of Context-Aware Pronoun Translation in Neural Machine Translation. In Proceedings of the Third Conference on Machine Translation: Research Papers , pages 61–72, Belgium, Brussels. Association for Computational Linguistics. Myle Ott, Michael Auli, David Grangier, and Marc’Aurelio Ranzato. 2018. Analyzing uncertainty in neural machine translation. In ICML, volume 80 of JMLR Workshop and Conference Proceedings, pages 3953–3962. JMLR.org. Martin Popel and Ondrej Bojar. 2018. Training Tips for the Transformer Model. pages 43–70. Annette Rios and Don Tuggener. 2017. Co-reference resolution of elided subjects and possessive pronouns in spanish-english statistical machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 657–662, Valencia, Spain. Association for Computational Linguistics. Lorenza Russo, Sharid Loáiciga, and Asheesh Gulati. 2012. Improving machine translation of null subjects in italian and spanish. In Proceedings of the Student Research Workshop at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 81–89, Avignon, France. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Jörg Tiedemann. 2010. 
Context adaptation in statistical machine translation using models with exponentially decaying cache. In Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing, pages 8–15, Uppsala, Sweden. Association for Computational Linguistics. Jörg Tiedemann and Yves Scherrer. 2017. Neural Machine Translation with Extended Context. In Proceedings of the Third Workshop on Discourse in Machine Translation, DISCOMT’17, pages 82–92, Copenhagen, Denmark. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, Los Angeles. Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1264–1274, Melbourne, Australia. Association for Computational Linguistics. Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017. Exploiting Cross-Sentence Context for Neural Machine Translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP’17, pages 2816– 2821, Denmark, Copenhagen. Association for Computational Linguistics. Longyue Wang, Zhaopeng Tu, Xiaojun Zhang, Hang Li, Andy Way, and Qun Liu. 2016. A novel approach to dropped pronoun translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 983–993, San Diego, California. Association for Computational Linguistics. Billy T. M. Wong and Chunyu Kit. 2012. Extending machine translation evaluation metrics with lexical cohesion to document level. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1060–1068, Jeju Island, Korea. Association for Computational Linguistics. Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass decoding. In NIPS, Los Angeles. Hao Xiong, Zhongjun He, Hua Wu, and Haifeng Wang. 2018. Modeling Coherence for Discourse Neural Machine Translation. In arXiv:1811.05683. ArXiv: 1811.05683. Kazuhide Yamamoto and Eiichiro Sumita. 1998. Feasibility study for ellipsis resolution in dialogues by machine-learning technique. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2. Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018. 1209 Improving the transformer translation model with document-level context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 533–542, Brussels, Belgium. Association for Computational Linguistics. A Protocols for test sets In this section we describe the process of constructing the test suites. A.1 Deixis English second person pronoun “you” may have three different interpretations important when translating into Russian: the second person singular informal (T form), the second person singular formal (V form) and second person plural (there is no T-V distinction for the plural from of second person pronouns). 
Morphological forms for second person singular (V form) and second person plural pronoun are the same, that is why to automatically identify examples in the second person polite form, we look for morphological forms corresponding to second person plural pronouns. To derive morphological tags for Russian, we use publicly available pymorphy26 (Korobov, 2015). Below, all the steps performed to obtain the test suite are described in detail. A.1.1 Automatic identification of politeness For each sentence we try to automatically find indications of using T or V form. Presence of the following words and morphological forms are used as indication of usage of T/V forms: 1. second person singular or plural pronoun, 2. verb in a form corresponding to second person singular/plural pronoun, 3. verbs in imperative form, 4. possessive forms of second person pronouns. For 1-3 we used morphological tags predicted by pymorphy2, for 4th we used hand-crafted lists of forms of second person pronouns, because pymorphy2 fails to identify them. 6https://github.com/kmike/pymorphy2 A.1.2 Human postprocessing of identification of politeness After examples with presence of indication of usage of T/V form are extracted automatically, we manually filter out examples where 1. second person plural form corresponds to plural pronoun, not V form, 2. there is a clear indication of politeness. The first rule is needed as morphological forms for second person plural and second person singular V form pronouns and related verbs are the same, and there is no simple and reliable way to distinguish these two automatically. The second rule is to exclude cases where there is only one appropriate level of politeness according to the relation between the speaker and the listener. Such markers include “Mr.”, “Mrs.”, “officer”, “your honour” and “sir”. For the impolite form, these include terms denoting family relationship (“mom”, “dad”), terms of endearment (“honey”, “sweetie”) and words like “dude” and “pal”. A.1.3 Automatic change of politeness To construct contrastive examples aiming to test the ability of a system to produce translations with consistent level of politeness, we have to produce an alternative translation by switching the formality of the reference translation. First, we do it automatically: 1. change the grammatical number of second person pronouns, verbs, imperative verbs, 2. change the grammatical number of possessive pronouns. For the first transformation we use pymorphy2, for the second use manual lists of possessive second person pronouns, because pymorphy2 can not change them automatically. A.1.4 Human postprocessing of automatic change of politeness We manually correct the translations from the previous step. Mistakes of the described automatic change of politeness happen because of: 1. ambiguity arising when imperative and indicative verb forms are the same, 1210 2. inability of pymorphy2 to inflect the singular number to some verb forms (e.g., to inflect singular number to past tense verbs), 3. presence of related adjectives, which have to agree with the pronoun, 4. ambiguity arising when a plural form of a pronoun may have different singular forms. A.1.5 Human annotation: are both polite and impolite versions appropriate? After the four previous steps, we have text fragments of several consecutive sentences with consistent level of politeness. Each fragment uses second person singular pronouns, either T form or V form, without nominal markers indicating which of the forms is the only one appropriate. 
For each group we have both the original version, and the version with the switched formality. To control for appropriateness of both levels of politeness in the context of a whole text fragment we conduct a human annotation. Namely, humans are given both versions of the same text fragment corresponding to different levels of politeness, and asked if these versions are natural. The answers they can pick are the following: 1. both appropriate, 2. polite version is not appropriate, 3. impolite version is not appropriate, 4. both versions are bad. The annotators are not given any specific guidelines, and asked to answer according to their intuition as a native speaker of the language (Russian). There are a small number of examples where one of the versions is not appropriate and not equally natural as the other one: 4%. Cases where annotators claimed both versions to be bad come from mistakes in target translations: OpenSubtitles data is not perfect, and target sides contain translations which are not reasonable sentences in Russian. These account for 1.5% of all examples. We do not include these 5.5% of examples in the resulting test sets. A.2 Lexical cohesion The process of creating the lexical cohesion test set consists of several stages: 1. find passages where named entities are translated consistently, 2. extract alternative translations for these named entities from the lexical table of Moses (Koehn et al., 2007) induced from the training data, 3. construct alternative translations of each example by switching the translation of instances of the named entity, 4. for each example construct several test instances. A.2.1 Identification of examples with consistent translations We look for infrequent words that are translated consistently in a text fragment. Since the target language has rich morphology, to verify that translations are the same we have to use lemmas of the translations. More precisely, we 1. train Berkeley aligner on about 6.5m sentence pairs from both training and held-out data, 2. find lemmas of all words in the reference translations in the held-out data using pymorphy2, 3. find words in the source which are not in the 5000 most frequent words in our vocabulary whose translations have the same lemma. A.2.2 Finding alternative translations For the words under consideration, we find alternative translations which would be (i) equally appropriate in the context of the remaining sentence and text fragment (ii) possible for the model to produce. To address the first point, we focus on named entities, and we assume that all translations of a given named entity seen in the training data are appropriate. To address the second point, we choose alternative translations from the reference translations encountered in the training data, and pick only ones with a probability at least 10%. The sequence of actions is as follows: 1. train Moses on the training data (6m sentence pairs), 2. for each word under consideration (from A.2.1), get possible translations from the lexical table of Moses, 1211 3. group possible translations by their lemma using pymorphy2, 4. if a lemma has a probability at least 10%, we consider this lemma as possible translation for the word under consideration, 5. leave only examples with the word under consideration having several alternative translations. After that, more than 90% of examples are translations of named entities (incl. names of geographical objects). We manually filter the examples with named entities. 
A.2.3 Constructing a test set From the two previous steps, we have examples with named entities in context and source sentences and several alternative translations for each named entity. Then we 1. construct alternative translations of each example by switching the translation of instances of the named entity; since the target language has rich morphology, we do it manually, 2. for each example, construct several test instances. For each version of the translation of a named entity, we use this translation in the context, and vary the translation of the entity in the current sentence to create one consistent, and one or more inconsistent (contrastive) translation. B Experimental setup B.1 Data preprocessing We use the publicly available OpenSubtitles2018 corpus (Lison et al., 2018) for English and Russian.7 We pick sentence pairs with a relative time overlap of subtitle frames between source and target language subtitles of at least 0.9 to reduce noise in the data. As context, we take the previous sentence if its timestamp differs from the current one by no more than 7 seconds. Each long group of consecutive sentences is split into fragments of 4 sentences, with the first 3 sentences treated as context. More precisely, from a group of consecutive sentences s1, s2, . . . , sn we get (s1, . . . , s4), (s2, . . . , s5), . . . , (sn−3, sn). For CADec we also 7http://opus.nlpl.eu/ OpenSubtitles2018.php include (s1, s2) and (s1, s2, s3) as training examples. We do not add these two groups with less context for the concatenation model, because in preliminary experiments, this performed worse both in terms of BLEU and consistency as measured on our test sets. We use the tokenization provided by the corpus and use multi-bleu.perl8 on lowercased data to compute BLEU score. We use beam search with a beam of 4 for both base model and CADec. Sentences were encoded using byte-pair encoding (Sennrich et al., 2016), with source and target vocabularies of about 32000 tokens. Translation pairs were batched together by approximate sequence length. For the Transformer models (baselines and concatenation) each training batch contained a set of translation pairs containing approximately 160009 source tokens. It has been shown that Transformer’s performance depends heavily on the batch size (Popel and Bojar, 2018), and we chose a large batch size to ensure that models show their best performance. For CADec, we use a batch size that contains approximately the same number of translation instances as the baseline models. B.2 Model parameters We follow the setup of Transformer base model (Vaswani et al., 2017). More precisely, the number of layers in the base encoder, base decoder and CADed is N = 6. We employ h = 8 parallel attention layers, or heads. The dimensionality of input and output is dmodel = 512, and the innerlayer of a feed-forward networks has dimensionality dff = 2048. We use regularization as described in (Vaswani et al., 2017). B.3 Optimizer The optimizer we use is the same as in (Vaswani et al., 2017). We use the Adam optimizer (Kingma and Ba, 2015) with β1 = 0.9, β2 = 0.98 and ε = 10−9. We vary the learning rate over the course of training, according to the formula: lrate = scale · min(step_num−0.5, step_num · warmup_steps−1.5) 8https://github.com/moses-smt/ mosesdecoder/tree/master/scripts/generic 9This can be reached by using several of GPUs or by accumulating the gradients for several batches and then making an update. 
We use warmup_steps = 16000, scale = 4 for the models trained on 6m data (baseline (6m) and concatenation) and scale = 1 for the models trained on 1.5m data (baseline (1.5m) and CADec).
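For reference, the schedule can be written as a few lines of Python; this is a direct transcription of the formula above with the values reported for the 6m-data models, and the printed steps are only meant to show the warm-up and decay behaviour.

```python
# Learning-rate schedule: linear warm-up followed by step^-0.5 decay,
# scaled by a constant factor.
def lrate(step: int, warmup_steps: int = 16000, scale: float = 4.0) -> float:
    return scale * min(step ** -0.5, step * warmup_steps ** -1.5)

for step in (1000, 16000, 64000):
    print(step, round(lrate(step), 5))
```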
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1213–1223 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1213 A Compact and Language-Sensitive Multilingual Translation Method Yining Wang1,2, Long Zhou1,2, Jiajun Zhang1,2∗, Feifei Zhai4, Jingfang Xu4 and Chengqing Zong1,2,3 1National Laboratory of Pattern Recognition, CASIA, Beijing, China 2University of Chinese Academy of Sciences, Beijing, China 3CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing, China 4Sogou Inc., Beijing, China {yining.wang, long.zhou, jjzhang, cqzong}@nlpr.ia.ac.cn {zhaifeifei,xujingfang}@sogou-inc.com Abstract Multilingual neural machine translation (Multi-NMT) with one encoder-decoder model has made remarkable progress due to its simple deployment. However, this multilingual translation paradigm does not make full use of language commonality and parameter sharing between encoder and decoder. Furthermore, this kind of paradigm cannot outperform the individual models trained on bilingual corpus in most cases. In this paper, we propose a compact and language-sensitive method for multilingual translation. To maximize parameter sharing, we first present a universal representor to replace both encoder and decoder models. To make the representor sensitive for specific languages, we further introduce language-sensitive embedding, attention, and discriminator with the ability to enhance model performance. We verify our methods on various translation scenarios, including one-to-many, many-to-many and zero-shot. Extensive experiments demonstrate that our proposed methods remarkably outperform strong standard multilingual translation systems on WMT and IWSLT datasets. Moreover, we find that our model is especially helpful in low-resource and zero-shot translation scenarios. 1 Introduction Encoder-decoder based sequence-to-sequence architecture (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Zhang and Zong, 2015; Vaswani et al., 2017; Gehring et al., 2017) facilitates the development of multilingual neural machine translation (Multi-NMT) (Dong et al., 2015; Luong et al., 2016; Firat et al., 2016; Johnson et al., 2017; Gu et al., 2018). The domi∗Jiajun Zhang is the corresponding author and the work is done while Yining Wang is doing research intern at Sogou Inc.  N  La ng-S ens i Emb ed ding La ng-Sens i at tent ion Discrimn at or M Sr c N Tg t Enc oder Deco der Shar ing Rep res en to r N Lang-Sensi Embedding Lang-Sensi Attention Lang-Sensi Discrimnator M Src N Tgt Encoder Decoder Sharing Representor Figure 1: Our proposed compact representor, replacing encoder and decoder, can perform multilingual translation from M source languages to N target languages. We also introduce three specific modules consisting of language-sensitive embedding, language-sensitive attention, and language-sensitive discriminator. nant paradigm of Multi-NMT contains one encoder to represent multiple languages and one decoder to generate output tokens of separate languages (Johnson et al., 2017; Ha et al., 2016). This paradigm is widely used in Multi-NMT systems due to simple implementation and convenient deployment. However, this paradigm has two drawbacks. For one hand, using single encoder-decoder framework for all language pairs usually yields inferior performance compared to individually trained single-pair models in most cases (Lu et al., 2018; Platanios et al., 2018; Wang et al., 2018). 
For the other hand, although this paradigm saves lots of parameters compared to another Multi-NMT framework which employs separate encoders and decoders to handle different languages (Dong et al., 2015; Zoph and Knight, 2016; Luong et al., 2016; Firat et al., 2016), parameter sharing between encoder and decoder are not fully explored. Since both encoder and decoder have similar 1214 structures but use different parameters, the commonality of languages cannot be fully exploited in this paradigm. A natural question arises that why not share the parameters between encoder and decoder on multilingual translation scenario? To address these issues, we present a compact and language-sensitive method in this work, as shown in Figure 1. We first propose a unified representor by tying encoder and decoder weights in Multi-NMT model, which can not only reduce parameters but also make full use of language commonality and universal representation. To enhance the model ability to distinguish different languages, we further introduce languagesensitive embedding, attention, and discriminator. We conduct extensive experiments to verify the effectiveness of our proposed model on various Multi-NMT tasks including one-to-many and many-to-many which is further divided into balanced, unbalanced and zero-shot. Experimental results demonstrate that our model can significantly outperform the strong standard baseline multilingual systems and achieve even better performance than individually trained models on most of the language pairs. Specifically, our contributions are three-fold in this work: (1) We present a universal representor to replace encoder and decoder, leading to a compact translation model, which fully explores the commonality between languages. (2) We introduce language-sensitive embedding, attention, and discriminator which augment the ability of Multi-NMT model in distinguishing different languages. (3) Extensive experiments demonstrate the superiority of our proposed method on various translation tasks including one-to-many, many-to-many and zero-shot scenarios. Moreover, for many-tomany using unbalance translation pairs, we can achieve the new state-of-the-art results on IWSLT15 English-Vietnamese. For zero-shot translation, our methods can achieve even better results than individually trained models with the parallel corpus. 2 Background In this section, we will introduce the background of the encoder-decoder (Sutskever et al., 2014; Cho et al., 2014) framework and self-attentionbased Transformer (Vaswani et al., 2017). 2.1 Encoder-Decoder Framework Given a set of sentence pairs D = {(x, y)}, the encoder fenc with parameters θenc maps an input sequence x = (x1, x2, · · · , xn) to a sequence of continuous representations henc = (henc 1 , henc 2 , · · · , henc n ) whose size varies concerning the source sentence length. The decoder fdec with θdec generates an output sequence y = (y1, y2, · · · , ym) by computing P(yt|y<t) as follows: P(yt |y<t) = softmax( f (hdec, ct)) (1) where hdec is a sequence of continuous representations for the decoder and ct is the context vector which can be calculated as follows: ct = n Õ i=1 ai,t henc i (2) where ai,t is attention weight: ai,t = softmax(ei,t) = exp ei,t Ín j=1 exp ej,t (3) where ei,t is a similarity score between the source and target representations. The parameters of calculating cross-attention weight ai,t are denoted as θattn. 
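Because PDF extraction has garbled the formulas above, here is a clean restatement of Equations (1)–(3) using the notation defined in the surrounding text:

```latex
P(y_t \mid y_{<t}) = \mathrm{softmax}\!\big(f(\mathbf{h}^{\mathrm{dec}}, \mathbf{c}_t)\big) \qquad (1)

\mathbf{c}_t = \sum_{i=1}^{n} a_{i,t}\, \mathbf{h}^{\mathrm{enc}}_i \qquad (2)

a_{i,t} = \mathrm{softmax}(e_{i,t}) = \frac{\exp(e_{i,t})}{\sum_{j=1}^{n} \exp(e_{j,t})} \qquad (3)
```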
The encoder and decoder are trained to maximize the conditional probability of target sequence given a source sequence: Lt(D; θ) = |D| Õ d=1 M Õ t=1 log P(yt |y<t, x; θenc, θdec, θattn) (4) where M is target sentence length. For simplicity, we do not specify d in this formula. Both the encoder and decoder can be implemented by the different basic neural models structures, such as RNN (LSTM/GRU) (Sutskever et al., 2014; Cho et al., 2014), CNN (Gehring et al., 2017), and self-attention (Vaswani et al., 2017). Our proposed method can be applied to any encoder-decoder architecture. Considering the excellent translation performance of self-attention based Transformer (Vaswani et al., 2017), we implement our method based on this architecture. 2.2 Transformer Network Transformer is a stacked network with several layers containing two or three basic blocks in each layer. For a single layer in the encoder, it consists of a multi-head self-attention and a position-wise 1215 feed-forward network. For the decoder model, besides the above two basic blocks, a multi-head cross-attention follows multi-head self-attention. In this block, the calculation method of similarity score et in Equation 3 is a little different from Luong et al. (2015) and Bahdanau et al. (2015): ei,t = 1 √dm Wkhenc i ∗Wqhdec t (5) where dm is the dimension of hidden units, Wk and Wq are parameters of this cross-attention block, which are denoted as θattn in Equation 4. All the basic blocks are associated with residual connections, followed by layer normalization (Ba et al., 2016). Since the Transformer network contains no recurrence, positional embeddings are used in the model to make use of sequence order. More details regarding the architecture can be found in Vaswani et al. (2017). 2.3 Multilingual Translation In contrast to NMT models, multilingual models perform the multi-task paradigm with some degree of parameter sharing, in which models are jointly trained on multiple language pairs. We mainly focus on mainstream multilingual translation method proposed by Johnson et al. (2017), which has a unified encoder-decoder framework with a shared attention module for multiple language pairs. They decompose the probability of the target sequences into the products of per token probabilities in all translation forms: Lm−t(D; θ) = L Õ l=1 |Dl | Õ d=1 M Õ t=1 log P(yl t |xl, yl <t; θenc, θdec, θattn) (6) where L is the number of translation pairs and P(yl t|xl, yl <t; θ) denotes the translation probability of t-th word of the d-th sentence in l-th translation pair. Note that the translation process for all target languages uses the same parameter set θ. 3 Our Method In this section, we introduce our compact and language-sensitive method for multilingual translation, which can compress the model by a representor and improve model ability with languagesensitive modules. 3.1 A Compact Representor In Multi-NMT model, the encoder and decoder are two key components, which play analogous Embeddings Multi-head Self-Attention Residual&Norm N Softmax Linear Output Probabilities Language-Sensitive Discriminator + Position Embdding lang-1 lang-2 ... lang-n lang-1 lang-2 ... lang-n lang-1 lang-2 ... lang-n src tgt Language-Sensitive Embedding + + + + + + Representor Residual&Norm Feed Forward Residual&Norm Multi-head Language-Sensitive Cross-Attention Figure 2: The framework of Multi-NMT using our compact and language-sensitive method. roles and have a similar structure in each layer. 
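Since the representor reuses exactly these Transformer blocks, it helps to restate the training objective, the cross-attention score, and the multilingual objective of Section 2 in clean form, as the extracted formulas (4)–(6) above are hard to read:

```latex
\mathcal{L}_{t}(D;\theta) = \sum_{d=1}^{|D|} \sum_{t=1}^{M} \log P\big(y_t \mid y_{<t}, x;\, \theta_{enc}, \theta_{dec}, \theta_{attn}\big) \qquad (4)

e_{i,t} = \frac{1}{\sqrt{d_m}}\, (W_k \mathbf{h}^{\mathrm{enc}}_i)^{\top} (W_q \mathbf{h}^{\mathrm{dec}}_t) \qquad (5)

\mathcal{L}_{m\text{-}t}(D;\theta) = \sum_{l=1}^{L} \sum_{d=1}^{|D_l|} \sum_{t=1}^{M} \log P\big(y^{l}_{t} \mid x^{l}, y^{l}_{<t};\, \theta_{enc}, \theta_{dec}, \theta_{attn}\big) \qquad (6)
```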
We argue that encoder and decoder can share the same parameters if necessary. Thus, we introduce a representor to replace both encoder and decoder by sharing weight parameters of the self-attention block, feed-forward block and the normalization block, as shown in Figure 2. The representor parameters are denoted θrep. Therefore, the objective function (Equation 6) becomes: Lm−t(D; θ) = L Õ l=1 |Dl | Õ d=1 M Õ t=1 log P(yl t |xl, yl <t; θrep, θattn) (7) This representor (θrep) coordinates the semantic presentation of multiple languages in a closely related universal level, which also increases the utilization of commonality for different languages. 3.2 Language-Sensitive Modules The compact representor maximizes the sharing of parameters and makes full use of language commonality. However, it lacks the ability to discriminate different languages. In our method, we introduce three language-sensitive modules to enhance our model as follows: 1) Language-Sensitive Embedding: Previously, Press and Wolf (2017) conduct the weight tying of input and output embedding in NMT model. Generally, a shared vocabulary is built 1216 upon subword units like BPE (Sennrich et al., 2016b) and wordpiece (Wu et al., 2016; Schuster and Nakajima, 2012). However, it remains under-exploited which kind of embedding sharing is best for Multi-NMT. We divide the sharing manners into four categories including languagebased manner (LB, different languages have separate input embeddings), direction-based manner (DB, languages in source side and target side have different input embeddings), representorbased manner (RB, shared input embeddings for all languages) and three-way weight tying manner (TWWT) proposed in Press and Wolf (2017), in which the output embedding of the target side is also shared besides representor-based sharing. We compare these four sharing manners for MultiNMT in our experiments, and we will discuss the results in Section 5. Considering the last three sharing manners cannot model a sense of which language a token belongs to, we propose a new language-sensitive embedding in our method to specify different languages explicitly. Similar to the position embeddings described in Section 2, this kind of embedding is added to the embedding of each token for corresponding language, which can indicate the translation direction on the source side and guide the generation process for target languages. This embedding is denoted as Elang ∈R|K |∗dmodel, where |K| is the number of languages involved, and dmodel is the dimension of hidden states in our model. Note that this embedding can be learned during training. 2) Language-Sensitive Attention: In NMT architecture, cross-attention only appearing in the decoder network locates the most-relevant source part when generating each token in target language. For Multi-NMT, we introduce three different ways to design the cross-attention mechanism, consisting of i) shared-attention, ii) hybridattention, and iii)) language-sensitive attention utilized in our method. i): In our proposed compact representor, we share self-attention block between encoder and decoder. For the shared-attention, we make a further step to share parameters of cross-attention and self-attention, which can be regarded as coordination of information from both the source side and target side. ii): Different from the above attention mechanism, the hybrid-attention utilizes independent cross-attention modules but it is shared for all translation tasks. 
iii): In the language-sensitive attention, it allows the model to select the cross-attention parameters associated with specific translation tasks dynamically. In our paper, we investigate these three attention mechanisms. We argue that both the shared and hybrid mechanisms tend to be confused to extract information from different source languages when decoding multiple source languages with different word orders. Thus, we mainly focus on languages-sensitive attention in our method. To this end, we use multiple sets of parameters θattn to represent cross-attention modules of different translation tasks. However, language-sensitive attention does not support zero-shot translation because there is no explicit training set for this specific translation task. Therefore, we employ hybrid-attention mechanism in our zero-shot experiments. 3) Language-Sensitive Discriminator: In our method, the representor which shares encoder and decoder makes full use of language commonality, but it weakens the model ability to distinguish different languages. Hence we introduce a new language-sensitive discriminator to strengthen model representation. In NMT framework, the hidden states on the top layer can be viewed as a fine-grained abstraction (Anastasopoulos and Chiang, 2018). For this language-sensitive module, we first employ a neural model fdis on the top layer of reprensentor hrep top, and the output of this model is a language judgment score Plang. hdis = fdis(hrep top) Plang(d) = softmax(Wdis ∗hdis d + bdis) (8) where Plang(d) is language judgment score for sentence pair d, Wdis, bdis are parameters, which are denoted as θdis. We test two different types of neural models for fdis, including convolutional network with max pooling layer and two-layer feedforward network. And then, we obtain an discriminant objective function as follows: Ldis (θdis) = Õ k∈K |D| Õ d=1 I {gd = k} ∗logPlang (d) (9) where I {·} is indicator function, and gd belongs to language k. 1217 Finally, we incorporate the language-sensitive discriminator into our Multi-NMT model, and it can be optimized through an end-to-end manner for all translation language pairs D with the following objective function. L(D; θ) =L(D; θrep, θattn, θdis) =(1 −λ)Lm−t(θrep, θattn) + λLdis(θdis) (10) where λ is learned or pre-defined weight to balance the translation task and language judgment task. 4 Experimental Settings 4.1 Data In this section, we describe the datasets using in our experiments on one-to-many and many-tomany multilingual translation scenarios. One-to-Many: For this translation scenario, we perform one-to-two, one-to-three, and oneto-four multilingual translation on the combination of WMT-141 (English-to-German, briefly En→De), WMT-172 datasets (English-to-Latvian, briefly En→Lv) and WMT-183 (English-toFinnish, English-to-Chinese without UN part4, briefly En→Fi and En→Zh) datasets. Many-to-Many: For many-to-many translation, we test our methods on IWSLT-175 translation datasets, including English, Italian, Romanian, Dutch (briefly, En, It, Ro, Nl). In order to perform zero-shot translation, we discard some particular language pairs. We also evaluate our method on the unbalanced training corpus. To this end, we construct the training corpus using resource-rich En-De, En-Fi in WMT datasets and low-resource English-Vietnamese (briefly, En-Vi) in IWSLT-156. The statistical information of all the datasets is detailed in Table 1. 
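Before turning to the training details, the following is a minimal PyTorch-style sketch of how the three language-sensitive modules and the joint objective of Equation (10) could be wired around a shared representor. This is our illustration under stated assumptions, not the authors' implementation: all module and argument names are hypothetical, and the discriminator is simplified to a single linear layer rather than the CNN or two-layer feed-forward network described in Section 3.2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguageSensitiveHeads(nn.Module):
    """Sketch of the language-sensitive pieces described in Section 3.2."""

    def __init__(self, vocab_size, d_model, num_langs, num_pairs, lam=0.05):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)   # representor-based (RB) shared embedding
        self.lang_emb = nn.Embedding(num_langs, d_model)   # language-sensitive embedding
        # one cross-attention parameter set per translation pair (language-sensitive attention)
        self.cross_attn = nn.ModuleList(
            [nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
             for _ in range(num_pairs)])
        self.discriminator = nn.Linear(d_model, num_langs)  # simplified discriminator
        self.lam = lam                                       # lambda of Equation (10)

    def embed(self, tokens, lang_id, pos_emb):
        # add token, position and language embeddings (module 1); lang_id is a LongTensor
        return self.tok_emb(tokens) + pos_emb + self.lang_emb(lang_id)

    def cross_attend(self, query, memory, pair_id):
        # select the cross-attention parameters of the current translation pair (module 2)
        return self.cross_attn[pair_id](query, memory, memory)[0]

    def loss(self, trans_loss, top_states, lang_labels):
        # language judgment from top-layer representor states (Equations 8-9),
        # mixed with the translation loss as in Equation (10)
        logits = self.discriminator(top_states.mean(dim=1))
        dis_loss = F.cross_entropy(logits, lang_labels)
        return (1.0 - self.lam) * trans_loss + self.lam * dis_loss
```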
4.2 Training Details We implement our compact and languagesensitive method for Multi-NMT based on the tensor2tensor7 library. We use wordpiece method (Wu et al., 2016; Schuster and Nakajima, 2012) to 1http://www.statmt.org/wmt14/translation-task.html 2http://www.statmt.org/wmt17/translation-task.html 3http://www.statmt.org/wmt18/translation-task.html 4https://cms.unov.org/UNCorpus/ 5https://sites.google.com/site/iwsltevaluation2017 6https://sites.google.com/site/iwsltevaluation2015 7https://github.com/tensorflow/tensor2tensor Datasets Language pair Train Dev Test WMT En-De 4.50M 6003 3003 En-Lv 4.50M 2003 2001 En-Fi 3.25M 3000 3000 En-Zh 9.02M 2002 2001 IWSLT En-It 231.6k 929 1566 En-Ro 220.5k 914 1678 En-Nl 237.2k 1003 1777 Ro-It 217.5k 914 1643 En-Vi 130.9k 768 1268 Table 1: The statistics of all the datasets including WMT and IWSLT tasks. encode the combination of both source side sentences and target side sentences. The vocabulary size is 37,000 for both sides. We train our models using configuration transformer base adopted by Vaswani et al. (2017), which contains a 6layer encoder and a 6-layer decoder with 512dimensional hidden representations. Each minibatch contains roughly 3,072 source and 3,072 target tokens, which belongs to one translation direction. We use Adam optimizer (Kingma and Ba, 2014) with β1=0.9, β2=0.98, and ϵ=10−9. For evaluation, we use beam search with a beam size of k = 4 and length penalty α = 0.6. All our methods are trained and tested on a single Nvidia P40 GPU. 5 Results and Analysis In this section, we discuss the results of our experiments about our compact and language-sensitive method on Multi-NMT. The translation performance is evaluated by character-level BLEU5 for En→Zh translation and case-sensitive BLEU4 (Papineni et al., 2002) for other translation tasks. In our experiments, the models trained on individual language pair are denoted by NMT Baselines, and the baseline Multi-NMT models are denoted by Multi-NMT Baselines. 5.1 One-to-Many Translation 5.1.1 Main Results The main results on the one-to-many translation scenario, including one-to-two, one-to-three and one-to-four translation tasks are reported in Table 2. We present a typical Multi-NMT adopting Johnson et al. (2017) method on Transformer as our Multi-NMT baselines model. Obviously, Multi-NMT Baselines cannot outperform NMT Baselines in all cases, among which four directions are comparable and twelve are worse. 1218 Task Tgt NMT Baselines Multi-NMT Baselines Johnson et al. (2017) Three-Stgy Wang et al. (2018) Rep+Emb Rep+Emb +Attn Rep+Emb +Attn+Dis One-to-Two De 27.50 27.26 27.35 26.60 26.96 27.74 Lv 16.28 16.32 16.38 15.37 15.87 16.79 De 27.50 27.88 27.89 26.96 27.32 27.96 Fi 16.83 16.47 16.70 15.78 16.58 16.89 De 27.50 26.80 26.99 26.08 26.68 27.45 Zh 26.04 25.54 25.78 24.48 25.33 26.17 One-to-Three De 27.50 25.44 25.55 24.82 25.45 26.06 Zh 26.04 24.87 25.63 24.12 24.93 26.12 Fi 16.83 16.86 16.97 16.06 16.78 17.12 De 27.50 25.98 26.12 24.88 25.80 26.42 Lv 16.28 14.88 15.44 14.51 15.58 16.31 Fi 16.83 16.94 17.05 16.15 16.79 17.22 One-to-Four De 27.50 23.59 22.88 22.88 23.58 24.08 Lv 16.28 15.57 16.02 15.00 16.21 16.57 Zh 26.04 25.24 25.83 24.15 25.27 26.29 Fi 16.83 13.45 14.12 12.99 14.11 15.03 Table 2: Translation performance on one-to-two, one-to-three and one-to-four translation tasks. Rep denotes our proposed representor. Emb, Attn, and Dis represent our proposed language-sensitive methods to address multilingual translation. 
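For reference, the optimizer and decoding settings of Section 4.2, used for all systems compared in Table 2, can be summarized as the following hedged sketch; the configuration keys are illustrative rather than tensor2tensor's actual flag names.

```python
import torch

# Restatement of the Section 4.2 settings; the model and data pipeline are omitted.
config = {
    "hparams_set": "transformer_base",  # 6-layer encoder/decoder, 512-dim hidden states
    "batch_tokens": 3072,               # per side, one translation direction per mini-batch
    "vocab_size": 37000,                # shared wordpiece vocabulary
    "beam_size": 4,
    "length_penalty": 0.6,
}

def make_optimizer(params):
    # Adam with the betas and epsilon quoted above
    return torch.optim.Adam(params, betas=(0.9, 0.98), eps=1e-9)
```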
Note that the source language of all our experiments is English. 100 100 100 41 27.3 20.5 29.8 19.2 14.5 35.2 22.5 18.7 35.4 22.6 18.8 0 20 40 60 80 100 120 one-to-two one-to-three one-to-four Indiv Baselines Rep+Emb Rep+Emb+Attn Rep+Emb+Attn+Dis Number of Parameters per Language Pair (M) Tasks Figure 3: The comparison of model scale among individually trained system, baselines Multi-NMT system and our methods. Y-axis represents the model parameters per language pair, which is calculated by averaging model parameters on all translation tasks involved. With respect to our proposed method, it is clear that our compact method consistently outperforms the baseline systems. Compared with another strong one-to-many translation model Three-Stgy proposed by Wang et al. (2018), our compact method can achieve better results as well. Moreover, our method can perform even better than individually trained systems in most cases (eleven out of sixteen cases). The results demonstrate the effectiveness of our method. 5.1.2 Model Size Besides improving the translation results, we also compress the model size by introducing the representor. We investigate the scale of parameters used on average in each translation direction. We compare three models, including NMT Baselines model, Multi-NMT Baselines model, and our compact Multi-NMT model. As shown in Figure 3, all Src→Tgt Emb Manners Size Tgt-1 Tgt-2 En→De/Lv LB 139M 26.58 15.76 DB 100M 27.22 16.26 RB 82M 27.26 16.32 TWWT 63M 26.82 16.02 En→De/Zh LB 139M 27.34 25.61 DB 100M 27.15 25.22 RB 82M 27.22 25.38 TWWT 63M 26.91 24.99 Table 3: Size (number of parameters) and BLEU scores of various embedding sharing manners. LB, DB, RB, TWWT denote language-based manner, directionbased manner, representor-based manner, and threeway weight tying manner separately, as mentioned in Section 3.2. Tgt-1 and Tgt-2 mean the results of the first (De) and the second (Lv/Zh) target language. the multilingual translation models reduce the parameters. Compared with Multi-NMT Baselines, we can observe that our method further reduces the model size of Multi-NMT. Considering Table 2 and Figure 3 together, we note that even though our proposed method in one-to-four translation task only uses 18.8% parameters of NMT Baselines, we can achieve better performance on En→Zh and En→Lv. 5.1.3 Discussion of Language-Sensitive Modules Table 2 shows that our proposed languagesensitive modules are complementary with each other. In this subsection, we will analyze each module in detail. Language-Sensitive Embedding: As mentioned in section 3.2, embedding sharing man1219 0.01 0.03 0.05 0.07 0.09 22.4 22.6 22.8 23.2 23.4 BLEU Score 22.57 22.76 23.19 23.07 22.67 22.63 22.91 23.35 23.19 22.74 CNN on En-De FFN on En-De 23.0 0.01 0.03 0.05 0.07 0.09 22.2 22.4 22.6 22.8 23.0 BLEU Score 22.21 22.37 22.97 22.78 22.43 22.15 22.28 22.72 22.56 22.24 CNN on En-Lv FFN on En-Lv Figure 4: The comparison of two neural models with different hyper-parameter λ. CNN and FFN denote convolution network and feed-forward network, respectively. ners for Multi-NMT are divided into four categories. We show the results of these sharing manners in Table 3. To make a fair comparison, we sample 4.5M sentence pairs from En-Zh dataset. As shown in this table, our representorbased sharing manner consistently outperforms both the direction-based manner and three-way weight tying manner. 
Furthermore, even though the representor-based manner has about 40% fewer parameters than the language-based manner, it achieves comparable or even better performance. We find that language-based sharing manner is unstable because it achieves the highest BLEU score on Multi-NMT of similar languages (En→De/Zh), but the worst quality on dissimilar languages (En→De/Lv). Taking into account of translation quality and stability, we choose to use representor-based sharing manner in our method. As described in Section 3.2, our proposed language-sensitive embedding is added to the input embedding of each token, which is unlike convention Multi-NMT method adding a special token into source side sentences or vocabularies (Johnson et al., 2017; Ha et al., 2016). There exists a question, is this kind of embeddings essential in our representor? To make a verification, we do the ablation study without this module. We observe that Multi-NMT model does not converge during training, which demonstrates these language-sensitive embeddings play a significant role in our model. Language-Sensitive Attention: We present three types of cross-attention mechanisms in Section 3.2. We adopt shared-attention and language-sensitive attention for Rep+Emb and Rep+Emb+Attn separately. Comparing these two methods in Table 2, Rep+Emb+Attn method outperforms Rep+Emb method in all cases, which demonstrates the language-sensitive is useful for multiple language pairs with different word order. We also conduct the experiment of our representor with the hybrid-attention mechanism. Since this method has similar performance with Rep+Emb but is larger in size, we ignore its results here. Language-Sensitive Discriminator: In section 3.2, we employ two different types of the neural model as a language-sensitive discriminator, and there is a hyper-parameter λ in Equation 10. We present the effect of convolutional network and feed-forward network with different hyper-parameters on development datasets in Figure 4. Considering that distinguishing between languages is only an auxiliary task in Multi-NMT, we set the maximum of λ to be 0.1. As shown in Figure 4, when we adopt the convolution network as our discriminator with λ = 0.05, our languagesensitive method performs best. We also conduct the experiments in which the hyper-parameter λ is learnable. The experiment results are similar to the best settings mentioned above both on En→De (23.35 vs. 23.19) and En→Lv (22.97 vs. 22.72). For simplicity, all our experiments listed in Table 2 and 4 adopt convolution network as the languagesensitive discriminator with λ = 0.05. 5.2 Many-to-Many Translation Table 4 reports the detailed results of different methods under the many-to-many translation scenario. We will analyze the performance below. 1220 Task Src→Tgt NMT Baselines Multi-NMT Baselines Johnson et al. 
(2017) Rep+Emb Rep+Emb +Attn Rep+Emb +Attn+Dis Many-to-Many for Balanced Corpus I Supervised Four-to-Four En→It 28.41 29.53 29.47 29.98 30.23 It→En 30.66 31.70 31.76 32.23 32.75 En→Ro 21.41 22.23 22.16 22.87 23.53 Ro→En 26.09 27.69 27.58 27.98 28.32 En→Nl 25.88 27.88 26.96 27.32 27.96 Nl→En 27.48 28.67 28.58 28.86 29.32 It→Ro 12.77 13.86 13.89 14.35 14.89 Ro→It 13.54 14.78 14.66 14.87 15.22 II Zero-Shot Nl→Ro 14.15 13.70 13.98 15.12 15.54 Ro→Nl 14.33 13.91 14.17 14.86 15.41 It→Nl 18.24 17.97 18.02 18.98 19.74 Nl→It 18.11 17.59 18.16 19.18 19.87 Many-to-Many for Unbalanced Corpus III Supervised Three-to-Three En→De 27.60 24.39 23.78 25.45 26.06 De→En 32.23 28.85 28.14 28.98 30.37 En→Fi 16.83 14.58 13.82 14.26 14.77 Fi→En 22.37 19.60 19.15 19.96 21.03 En→Vi 26.78 28.89 28.84 30.49 32.01 Vi→En 25.72 27.19 27.27 29.14 31.71 Table 4: Translation performance under the many-to-many scenario, consisting of supervised four-to-four and zero-shot translation on the balanced corpus, and supervised three-to-three on the unbalanced corpus. Note that we do not use the Nl-Ro and It-Nl language pairs in our many-to-many translation task for the balanced corpus. 5.2.1 Results of Balanced Corpus In part I of Table 4, our compact and languagesensitive method (Rep+Emb+Attn+Dis) performs consistently better than corresponding Multi-NMT Baselines, and it can achieve the improvements up to 1.30 BELU points (23.53 vs. 22.23 on En→Ro). Although Rep+Emb method dramatically reduces the model parameters, it performs on par with Multi-NMT Baselines. Compared with NMT Baselines model, our method also achieves better results, which is nearly 2 BLEU points on average. Experimental results on our balanced corpus demonstrate that our method is robust and valid under the many-to-many translation scenario. 5.2.2 Results of Unbalanced Corpus For unbalanced corpus, our method can achieve better results than Multi-NMT Baselines as well, as shown in part III of Table 4. Moreover, from the last two lines of this part, we can observe that compared with NMT Baselines, the translation quality of En↔Vi can achieve the improvements up to 5.23/5.99 BLEU points (32.01/31.71 vs. 26.78/25.72), both of which are new state-ofthe-art on these translation tasks to the best of our knowledge. The results show that our method is more effective in low-resource language pairs, especially for the unbalanced corpus. 5.2.3 Zero-Shot Results Part II in Table 4 shows the performance of zeroshot translation. Note that we conduct experiments of this translation scenario using hybridattention mechanism. Compared with Multi-NMT Baselines, our compact and language-sensitive method performs significantly better with the improvement as large as 2.28 BLEU points on Nl→It. Note that the training datasets do not contain parallel data for Nl-Ro and It-Nl. It is interesting to figure out the translation performance of Nl↔Ro and It↔Nl when bilingual training corpus is available. We conduct experiments of NMT Baselines on Nl-Ro and It-Nl with all sentence pairs in IWSLT-17 (about 200k), which is similar to other training pairs in our balanced corpus. As shown in part II, Multi-NMT Baselines underperform the NMT Baselines on all cases. However, our method performs better than NMT Baselines, and it achieves the improvement up to 1.76 BLEU points on Nl→It translation task. 
6 Related Work Our work is related to two lines of research, and we describe each of them as follows: Model Compactness and Multi-NMT: To reduce the model size in NMT, weight pruning, knowledge distillation, quantization, and weight sharing (Kim and Rush, 2016; See et al., 2016; He et al., 2018; Zhou et al., 2018) have been ex1221 plored. Due to the benefit of compactness, multilingual translation has been extensively studied in Dong et al. (2015), Luong et al. (2016) and Johnson et al. (2017). Owing to excellent translation performance and ease of use, many researchers (Blackwood et al., 2018; Lakew et al., 2018) have conducted translation based on the framework of Johnson et al. (2017) and Ha et al. (2016). Zhou et al. (2019) propose to perform decoding in two translation directions synchronously, which can be applied on different target languages and is a new research area for Multi-NMT. In our method, we present a compact method for Multi-NMT, which can not only compress the model but also yield superior performance. Low-Resource and Zero-Shot NMT: Many researchers have explored low-resource NMT using transfer learning (Zoph et al., 2016; Neubig and Hu, 2018) and data augmenting (Sennrich et al., 2016a; Zhang and Zong, 2016) approaches. For zero-shot translation, Cheng et al. (2017) and Chen et al. (2017) utilize a pivot-based method, which bridges the gap between sourceto-pivot and pivot-to-target two steps. Multilingual translation is another direction to deal with both low-resource and zero-shot translation. Gu et al. (2018) enable sharing of lexical and sentence representation across multiple languages, especially for extremely low-resource Multi-NMT. Firat et al. (2016), Lakew et al. (2017), and Johnson et al. (2017) propose to make use of multilinguality in Multi-NMT to address the zero-shot problem. In this work, we propose a method for Multi-NMT to boost the accuracy of the multilingual translation, which better fits on both lowresource scenario and zero-shot scenario. 7 Conclusion In this paper, we have proposed a compact and language-sensitive method for multilingual translation. We first introduce a representor for replacing both encoder and decoder so as to fully explore the commonality among languages. Based on the representor architecture, we then propose three language-specific modules dealing with embedding, attention and language discrimination respectively, in order to enhance the multilanguage translation model with the ability of distinguishing among different languages. The empirical experiments demonstrate that our proposed methods can outperform strong standard multilingual translation systems on one-to-many and many-to-many translation tasks. Moreover, our method is proved to be especially helpful in the low-resource and zero-shot translation scenarios. Acknowledgments The research work descried in this paper has been supported by the National Key Research and Development Program of China under Grant No. 2016QY02D0303 and the Natural Science Foundation of China under Grant No. U1836221 and 61673380. The research work in this paper has also been supported by Beijing Advanced Innovation Center for Language Resources and Sogou Inc. We would like to thank Yang Zhao and Yuchen Liu for their invaluable discussions on this paper. References Antonios Anastasopoulos and David Chiang. 2018. Tied multitask learning for neural speech translation. In Proceedings of NAACL 2018, pages 82–91. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. 
arXiv preprint arXiv:1607.06450. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR 2015. Graeme Blackwood, Miguel Ballesteros, and Todd Ward. 2018. Multilingual neural machine translation with task-specific attention. In Proceedings of COLING 2018, pages 3112–3122. Yun Chen, Yong Cheng, Yang Liu, and Li Victor, O.K. 2017. A teacher-student framework for zeroresource neural machine translation. In Proceedings of ACL 2017, pages 1925–1935. Yong Cheng, Qian Yang, Yang Liu, Maosong Sun, and Wei Xu. 2017. Joint training for pivot-based neural machine translation. Proceedings of IJCAI 2017, pages 3974–3980. Kyunghyun Cho, Bart van Merri¨enboer Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of EMNLP 2014, pages 1724–1734. Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of ACL 2015, pages 1723–1732. 1222 Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. In Proceedings of NAACL 2016, pages 866–875. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. arXiv preprint arXiv:1601.03317. Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor OK Li. 2018. Universal neural machine translation for extremely low resource languages. In Proceedings of NAACL 2018, pages 344–354. Thanh-Le Ha, Jan Niehues, and Alexander Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder. In Proceedings of IWSLT 2016. Tianyu He, Xu Tan, Yingce Xia, Di He, Tao Qin, Zhibo Chen, and Tie-Yan Liu. 2018. Layer-wise coordination between encoder and decoder for neural machine translation. In Proceedings of NIPS 2018, pages 7944–7954. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of EMNLP 2013, pages 1700–1709. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In Proceedings of EMNLP 2016, pages 1317–1327. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Surafel M Lakew, ADG Mattia, and F Marcello. 2017. Multilingual neural machine translation for low resource languages. CLiC-it. Surafel Melaku Lakew, Mauro Cettolo, and Marcello Federico. 2018. A comparison of transformer and recurrent neural networks on multilingual neural machine translation. In Proceedings of COLING 2018, pages 641–652. Yichao Lu, Phillip Keung, Faisal Ladhak, Vikas Bhardwaj, Shaonan Zhang, and Jason Sun. 2018. A neural interlingua for multilingual machine translation. In Proceedings of WMT 2018, pages 84–92. Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. In Proceedings of ICLR 2016. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. 
Effective approaches to attentionbased neural machine translation. In Proceedings of EMNLP, pages 1412–1421. Graham Neubig and Junjie Hu. 2018. Rapid adaptation of neural machine translation to new languages. In Proceedings of EMNLP 2018, pages 875–880. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL, pages 311–318. Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, and Tom Mitchell. 2018. Contextual parameter generation for universal neural machine translation. In Proceedings of EMNLP 2018, pages 425–435. Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings of EACL 2017, pages 157–163. Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In Proceedings of ICASSP 2012. Abigail See, Minh-Thang Luong, and Christopher D. Manning. 2016. Compression of neural machine translation models via pruning. In Proceedings of SIGNLL 2016, pages 291–301. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of ACL 2016, pages 86–96. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of ACL 2016, pages 1715–1725. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS, pages 3104–3112. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, and Łukasz Kaiser. 2017. Attention is all you need. In Proceedings of NIPS, pages 30–34. Yining Wang, Jiajun Zhang, Feifei Zhai, Jingfang Xu, and Chengqing Zong. 2018. Three strategies to improve one-to-many multilingual translation. In Proceedings of EMNLP 2018, pages 2955–2960. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. 1223 Jiajun Zhang and Chengqing Zong. 2015. Deep neural networks in machine translation: An overview. IEEE Intelligent Systems, 30(5):16–25. Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Proceedings of EMNLP, pages 1535– 1545. Long Zhou, Yuchen Liu, Jiajun Zhang, Chengqing Zong, and Guoping Huang. 2018. Languageindependent representor for neural machine translation. arXiv preprint arXiv:1811.00258. Long Zhou, Jiajun Zhang, and Chengqing Zong. 2019. Synchronous bidirectional neural machine translation. Transactions of the Association for Computational Linguistics, 7:91–105. Barret Zoph and Kevin Knight. 2016. Multi-source neural translation. In Proceedings of NAACL 2016, pages 30–34. Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of EMNLP 2016, pages 1568–1575.
2019
117
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1224–1234 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1224 Unsupervised Parallel Sentence Extraction with Parallel Segment Detection Helps Machine Translation Viktor Hangya and Alexander Fraser Center for Information and Language Processing LMU Munich, Germany {hangyav, fraser}@cis.lmu.de Abstract Mining parallel sentences from comparable corpora is important. Most previous work relies on supervised systems, which are trained on parallel data, thus their applicability is problematic in low-resource scenarios. Recent developments in building unsupervised bilingual word embeddings made it possible to mine parallel sentences based on cosine similarities of source and target language words. We show that relying only on this information is not enough, since sentences often have similar words but different meanings. We detect continuous parallel segments in sentence pair candidates and rely on them when mining parallel sentences. We show better mining accuracy on three language pairs in a standard shared task on artificial data. We also provide the first experiments showing that parallel sentences mined from real life sources improve unsupervised MT. Our code is available, we hope it will be used to support low-resource MT research. 1 Introduction The performance of machine translation has improved significantly recently, with some claims of even being close to human parity (Hassan et al., 2018), but a large amount of parallel data is required for high quality systems. For many language pairs the size of the available training data is not adequate. Recently, developments in the field of unsupervised bilingual word embeddings (BWEs) made it possible to build MT systems without any parallel data. Both statistical (Lample et al., 2018b; Artetxe et al., 2018b) and neural (Artetxe et al., 2018c; Lample et al., 2018a) MT approaches were proposed which are promising directions to overcome the data sparsity problem. However, various issues of the approaches still have to be solved, e.g., better word reordering during translation or tuning system parameters. For many interesting low resource language pairs, we do not have enough parallel data, but we do have access to sources of comparable monolingual text. In this paper we propose a strong unsupervised system for parallel sentence mining and show that the mined data improves the performance of unsupervised MT systems. Previously many approaches tackled the problem of parallel sentence extraction but they were relying on different levels of bilingual signals either to build dictionaries (Grover and Mitra, 2017), parallel sentence classifiers (Bouamor and Sajjad, 2018) or bilingual sentence representations (Schwenk, 2018). An unsupervised system was also proposed which only relied on unsupervised BWEs, thus no additional resources are needed (Hangya et al., 2018). We use this approach as our baseline and show that relying only on word similarity information leads to false positive sentence pairs, such as in this example: • The US dollar has a considerable role in the international monetary system. • Die Rolle des US Dollar im internationalen Geldsystem sollte neu ¨uberdacht werden. (The role of the US dollar in the international monetary system should be reconsidered.) Both sentences mention the role of the US dollar in the international monetary system, but the overall claim is different. 
One major disadvantage of the approach of (Hangya et al., 2018) is that, by only relying on word similarities, sentence pairs which have similar meanings but are not exactly parallel are often mined. We overcome this problem by detecting continuous parallel segments in the candidate sentence pairs. We align similar words in the candidate sentence pairs, instead of just averaging their similarity, and use the alignments in order to detect continuous sub-sentential segments on both 1225 sides that are aligned with each other. In order to increase the precision of our system we only mine similar sentence pairs where the detected parallel segments form a large part of the full sentence pairs thus overcoming the problem of only nearly parallel sentence pairs mentioned above. We conduct two sets of experiments to show that our system mines more useful parallel sentences and that they are beneficial for MT systems. First, we evaluate the accuracy of the mining approach on the BUCC 2017 shared task data (Zweigenbaum et al., 2017). We show that by looking for continuous parallel segments we can increase the performance significantly compared to (Hangya et al., 2018), especially the precision of the system, on German-, French- and RussianEnglish language pairs.1 Second, since the data used in previous work was artificially assembled, we use real life German and English monolingual news crawl data to mine parallel sentences, and use them to improve an unsupervised neural MT system by using the extracted data as silverstandard parallel training data. We show for the first time that exploiting comparable monolingual text sources with an unsupervised parallel sentence mining system helps unsupervised MT. Furthermore, we achieve increased performance compared with the previous unsupervised mining system. 2 Related Work Most previous systems addressing parallel sentence extraction depend on bilingual resources which makes their applicability problematic in low-resource scenarios. Munteanu et al. (2004) used a bilingual dictionary and a small number of parallel sentences to train a maximum entropy classifier for mining Arabic and English parallel sentences. Similarly, parallel data was used to train IBM Model 1 and a maximum entropy classifier (Smith et al., 2010). Munteanu and Marcu (2006) extracted parallel sub-sentential segments from partly parallel sentences and used them to improve a statistical MT system. We follow this idea in our work and detect continuous parallel segments in order to weight the similarity values of candidate sentence pairs. To further promote the task, the BUCC 2017 shared task – Identifying parallel sentences in comparable corpora 1Chinese-English is left for future work, as a study of unsupervised Chinese word segmentation approaches is needed. – was organized, where parallel sentences were automatically inserted into two monolingual corpora to produce gold standard train and test data in order to measure the performance of participating systems (Zweigenbaum et al., 2017). Since then, various neural architectures were proposed. Bilingual word embeddings were used in (Grover and Mitra, 2017), neural sentence pair classifiers were used in (Bouamor and Sajjad, 2018) and bilingual sentence representations were trained in (Schwenk, 2018). The disadvantage of the mentioned methods is that they need a bilingual signal to be trained, in contrast with our approach which only uses monolingual data. 
A fully unsupervised system was proposed in (Hangya et al., 2018) but the system introduced too much noise by mining sentence pairs with similar words but different meaning. Also, the usefulness of the system in downstream tasks was not tested. Our approach is based on BWEs where representations of source and target language words are in the same bilingual space. Previous approaches building BWEs were using bilingual signals of various granularity. Following Mikolov et al. (2013), many authors map monolingual word embeddings into the same bilingual space (Faruqui and Dyer, 2014; Xing et al., 2015), others leverage parallel texts (Gouws et al., 2015) or create artificial cross-lingual corpora using seed lexicons or document alignments (Vuli´c and Moens, 2015; Duong et al., 2016) to train BWEs. Several authors have shown that good quality BWEs can be trained by mapping monolingual spaces without any bilingual signal. Conneau et al. (2018) used adversarial training to rotate the source space to match the target and extracted an initial lexicon to fine tune the mapping. Others used word neighborhood information to create an initial mapping (Artetxe et al., 2018a; Alvarez-Melis and Jaakkola, 2018). We use the work of Conneau et al. (2018) to build BWEs for parallel sentence extraction. The development of unsupervised BWEs opened the door to creating machine translation systems without any parallel data. Unsupervised BWEs are used to make initial word-by-word translating systems which are then improved by iterative back-translation (Sennrich et al., 2016) using neural systems (Lample et al., 2018a; Artetxe et al., 2018c; Yang et al., 2018). It is also possible to initialize phrase tables for statistical MT systems and increase their performance with the 1226 same back-translation techniques (Lample et al., 2018b; Artetxe et al., 2018b). Although the initial results are promising, there are many issues still to be solved. In our experiments we use the NMT system of (Artetxe et al., 2018c). We show that the addition of our mined parallel data improves performance over baseline results. 3 Approach Our approach for mining parallel sentences is based on calculating the similarity of sentence pair candidates. To avoid mining pairs having similar words but different meaning we look for continuous parallel segments in the candidates based on word alignments. We use the length of the segments to either filter the candidate out or to weight the averaged similarity scores of words to get the final score of a given candidate. 3.1 Word Similarity The first step of our method is to define the similarity of words. For this we use BWEs, where source and target language words are embedded in the same vector space. First, we build monolingual word embeddings and map the source words into the target space. Initially, a seed lexicon of source and target language words was needed to learn a mapping between the two spaces (Mikolov et al., 2013). Conneau et al. (2018) showed that good quality BWEs can be produced without any bilingual signal, by using an adversarial system to learn an initial mapping of the two spaces and mine frequent source words and their most similar pairs from the target language to form an initial seed lexicon. Using this initial lexicon the mapping can be further tuned using orthogonal mapping (Xing et al., 2015). We use the system of Conneau et al. (2018) to build unsupervised BWEs. 
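The orthogonal refinement step mentioned above (Xing et al., 2015; Conneau et al., 2018) is in essence a Procrustes problem. A minimal numpy sketch, assuming a seed dictionary of (source index, target index) pairs, is shown below; the variable names are ours, not taken from the MUSE codebase.

```python
import numpy as np

def procrustes_mapping(src_vecs, tgt_vecs, seed_pairs):
    """Solve min_W ||W X - Y||_F with W orthogonal, where X and Y hold the source
    and target embeddings of the seed lexicon (as columns). The closed-form
    solution is W = U V^T from the SVD of Y X^T."""
    src_idx, tgt_idx = zip(*seed_pairs)
    X = src_vecs[list(src_idx)].T          # d x n
    Y = tgt_vecs[list(tgt_idx)].T          # d x n
    U, _, Vt = np.linalg.svd(Y @ X.T)
    W = U @ Vt                             # orthogonal mapping matrix
    return src_vecs @ W.T                  # all source vectors mapped into the target space
```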
To measure similarity of words we use the cosine similarity based Cross-Domain Similarity Local Scaling (CSLS) metric (Conneau et al., 2018) which aims to overcome the hubness problem of high dimensional spaces (Dinu et al., 2015). In short, this metric adjusts the similarity values of a word based on the density of the area where it lies, i.e., it increases similarity values for a word lying in a sparse area and decreases values for a word in a dense area. We create a dictionary of the 100 nearest target words for each source language word with their similarities using CSLS. Even though good quality dictionaries can be built based on BWEs, the translations of some words, such as named entities and rare words, can be improved using orthographic information (Braune et al., 2018; Riley and Gildea, 2018). We follow the approach of Braune et al. (2018) and create a dictionary similar to the dictionary in the previous paragraph but using orthographic similarity of words, i.e., one minus normalized Levenshtein distance, instead of CSLS. We then merge the two dictionaries to get the final set of similar word pairs by taking all target words from both dictionaries for each source language word2. To build monolingual embeddings we use fastText’s skipgram model (Bojanowski et al., 2017) with dimension size 300 and keeping all other parameters default3. We use MUSE as the implementation of (Conneau et al., 2018) with default parameters4 for building unsupervised BWEs. 3.2 Parallel Segment Detection The next step of our approach is to calculate the similarities of sentence pair candidates using the dictionaries created above. Various algorithms were proposed to measure sentence similarities, such as the Hungarian alignment (Kuhn, 1955; Varga et al., 2007) and the Word Mover’s Distance (Kusner et al., 2015). On the other hand, these methods are computationally expensive for parallel sentence extraction where the number of sentence pair candidates is huge. Due to performance considerations Hangya et al. (2018) proposed a fast word similarity based method to calculate sentence similarity by averaging the scores of the most similar words. The disadvantage of relying only on similar words is that non-parallel candidates having similar words are often wrongly mined, as already discussed. To overcome this problem, we align words in the candidate sentence pairs in order to detect parallel segments similarly to Munteanu and Marcu (2006). Our hypothesis is that such continuous segments are more related, thus candidates having long enough segments are parallel. Our algorithm is illustrated in Figure 1. We iterate over the source sentences from left to right and greedily align each source word to the most similar target word that was not already aligned. We note that source words can be left unaligned if 2If a translation is in both dictionaries we take the max of the values. 3See the Facebook Research fastText GitHub page. 4See the Facebook Research MUSE GitHub page. 1227 In the next couple of weeks it will all be over 0 0.2 0.4 0.6 0.8 1 align. score avg. score threshold avg. window In den n¨achsten Wochen kann das noch schlimmer sein Figure 1: The figure depicts our algorithm for parallel segment detection on a non-parallel sentence pair. The aligned words and their scores are shown together with the smoothed values using average filtering of window size 5. Detected segments with respect to 0.3 threshold value are bolded on both source (En) and target (De) sides. 
Averaged scores on the target side are calculated based on target sentence word order which is used for target segment detection (we omit this part of the diagram). To decide if the pair is parallel we average word alignment scores of the full source sentence, weight it using the length of the detected segment and check if it reaches a given threshold. Translation of the target sentence: In the next weeks this can be even worse. none of the possible target words are in the used dictionary entry for that word. Similarly, target words could be unaligned as well. We assign an alignment score for each position of the source and target sentences respectively. The alignment score for a word at position i is its similarity score to its aligned word (taken from the dictionary used) or 0 if the word is unaligned. We then look for continuous segments on both source and target sides by looking for sequences of indices where the alignment scores are higher then a given threshold value. Since the use of mostly function words could vary across languages, e.g., En: in the international vs. De: im (in+dem) internationalen, these words often remain unaligned resulting in gaps in the sequences, and so fragmented parallel segments are formed. To allow a small number of unaligned words in the extracted segments we apply an average filter on the alignment score sequences with a predefined window size at each position giving a smoothed alignment value. After extracting segments from both sides of a candidate pair, we align source and target side segments by matching those which have the most word alignments between each other. The number of segments could be unbalanced on the two sides thus we ignore segments which are not aligned with segments on the other side. Furthermore, we filter segments by dropping all segment pairs if i) either side is shorter than a given threshold and if ii) the length difference of the pair is larger than 5 tokens. We note that our algorithm at this point can be used to mine parallel segments from sentence pairs. However, our focus in this paper is to mine complete sentence pairs which we describe in the following. 3.3 Parallel Sentence Mining To acquire the final similarity score for a candidate sentence pair we use both word alignment scores and the detected segments. If no parallel segment is detected or remains after the filtering steps we consider the candidate as non-parallel, i.e., set its similarity score to 0. Otherwise, we average word alignment scores of the full sentence and weight it with the ratio between the length of the longest source segment and that of the full sentence. This way if a candidate pair has highly similar words but has unparallel parts we decrease its overall similarity. We consider a candidate pair as parallel if its score is larger than a given threshold value. We note, that we only use the longest segment in order to reach high precision. It is possible that the segments are fragmented in parallel sentence pairs separated by short non-parallel phrases, resulting in false negatives. On the other hand, using the sum of the length of all segments could lead to false positives. Thus, we only rely on the longest segment and use the size parameter of the average filter to balance the fragmentation. We detail the 1228 used parameters for each experiment in the following sections. We applied pre-filtering of candidates due to the large number of possible sentence pairs. 
Following Gr´egoire and Langlais (2017), we only consider the 100 most similar target sentences for each source sentence as candidates. We calculate sentence similarity by embedding them using averaged word vectors and measuring their cosine distance which can be run efficiently using GPUs even on large datasets (Johnson et al., 2017). 4 Evaluation on BUCC 2017 We conduct our first set of experiments on the BUCC 2017 shared task data (Zweigenbaum et al., 2017). The aim of this shared task is to quantitatively evaluate methods for extracting parallel sentences from comparable monolingual corpora. Train, development and test datasets were built for 4 language pairs German-, French-, Russianand Chinese-English language pairs. The data was built automatically by inserting parallel news commentary sentences into monolingual wikipedia dumps. To make sure that the insertions are not easy to detect parallel sentences were only inserted if other strongly related sentences in terms of their topic are present in the monolingual corpus. We use the system of (Hangya et al., 2018) as our baseline and run experiments on the first three language pairs (as we already mentioned, we would need to study Chinese unsupervised word segmentation to run Zh-En experiments). We consider English as the target language in all cases. 4.1 Evaluation Setup Following the data selection and preprocessing steps of the baseline we use monolingual news crawls, downloaded between 2011 and 2014 taken from the WMT 2014 shared task (Bojar et al., 2014), for building the initial monolingual word embeddings. We tuned our system parameters using the development data on all language pairs. We performed tuning in the following intervals: threshold value for segment detection 0.2 −0.4; window size of average filter 5 −20; threshold value for deciding parallelism 0.1 −0.6; minimum segment length 20% −50% of the original sentence. We note that for the experiments in this section we kept the minimum segment length low in order not to filter out candidates aggressively but to decrease their scores instead. This way canP (%) R (%) F1 (%) De-En avg 23.71 44.57 30.96 align-static 44.63 41.13 42.81 align-dyn 48.53 39.18 43.35 Fr-En avg 39.02 52.61 44.81 align-static 43.20 41.27 42.21 align-dyn 50.51 38.11 43.44 Ru-En avg 16.75 24.20 19.80 align-static 25.85 23.33 24.53 align-dyn 37.44 18.73 24.97 Table 1: Precision, recall and F1 scores for our proposed system and the baseline (avg) on the BUCC 2017 dataset. didates with short segments could still be mined. In Section 5 we will use a higher value to favor precision over recall. Besides using a static value for deciding parallelism we also used the dynamic thresholding proposed in (Hangya et al., 2018): th = ¯S + λ ∗std(S) (1) where S is a set containing the similarity values of each source sentence in the test set and its most similar target candidate, ¯S and std(S) are its mean and standard deviation. We performed a less intensive tuning of λ as suggested. As in previous work, we evaluate our system on the training set of the shared task since the official test set is undisclosed. We do not use the train set to either train or tune our system. 4.2 Results We show precision, recall and F1 scores in Table 1 for the three language pairs. In addition to the baseline (avg) system, which only relies on averaged word similarity scores, we show the performance of our proposed system with static and dynamic thresholding. 
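For concreteness, a simplified sketch of the segment-weighted scoring of Sections 3.2–3.3 and the dynamic threshold of Equation (1) is given below. It ignores target-side segments, segment alignment, and the minimum-length filters, and all names are ours rather than those of the released code.

```python
import numpy as np

def segment_weighted_score(align_scores, window=5, seg_threshold=0.3):
    """Smooth per-position alignment scores with an average filter, find the
    longest run above the threshold, and weight the mean alignment score of the
    whole sentence by that segment's relative length."""
    scores = np.asarray(align_scores, dtype=float)
    kernel = np.ones(window) / window
    smoothed = np.convolve(scores, kernel, mode="same")   # average filtering
    above = smoothed >= seg_threshold
    best = run = 0
    for flag in above:                                    # longest continuous segment
        run = run + 1 if flag else 0
        best = max(best, run)
    if best == 0:
        return 0.0                                        # no parallel segment detected
    return scores.mean() * (best / len(scores))

def dynamic_threshold(best_pair_scores, lam=1.0):
    """Dynamic threshold of Equation (1): mean plus lam * std over the best
    candidate score of every source sentence."""
    s = np.asarray(best_pair_scores, dtype=float)
    return s.mean() + lam * s.std()
```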
Our system achieved a significant increase of F1 for German- and Russian-English language pairs. For both pairs we achieved a large increase of precision, especially in the case of German-English where the improvement is over 20%. On the other hand, we experienced a slight drop of recall due to our stricter approach for the mining process. For the FrenchEnglish language pair the F1 score has decreased slightly. It can be seen that the precision of the system was significantly increased for this language pair as well, proving that we extract less pairs which are similar but not parallel. In contrast, 1229 1. Benchmarking-Ergebnisse werden u.a. im Global Competitiveness Report des World Economic Forum ver¨offentlicht. Benchmarking results are published among others in the World Economic Forum’s Global Competitiveness Report. These ratios are compiled and published by the World Economic Forum. 2. Ende 1994 gelang es dem afghanischen Verteidigungsminister Ahmad Schah Massoud, Hekmatyr und die verschiedenen Milizen milit¨arisch in Kabul zu besiegen. At the end of 1994, Afghan defense minister Ahmad Shah Massoud succeeded in defeating Hekmatyr and the various militias in Kabul. In late 1994, Rabbani’s defense minister, Ahmad Shah Massoud defeated Hekmatyr in Kabul and ended ongoing bombardment of the capital. 3. Die 20 gr¨oßten St¨adte der Welt sind, bis auf drei Ausnahmen, in Schwellenl¨andern zu finden. The 20 largest cities in the world, with three exceptions, can be found in emerging markets. Indeed, all but three of the worlds 20 largest cities are in emerging markets. Table 2: German-English examples with translations of German sentences shown in italic. Examples 1 and 2 are false positives of the baseline but not our proposed system while example 3 is a false negative of our approach. our conservative approach also misses true parallel pairs resulting in a significant drop in recall. However, we argue that precision is more important for downstream tasks, since noise in the data often hurts performance. Based on non-mined parallel examples we found that French segments tend to be more fragmented compared to other languages which leads to a stronger decrease in the sentence pair similarity scores. One solution to the problem could be to use a larger window size when detecting parallel segments. Using static and dynamically calculated threshold values performs comparably. It can be seen that dynamic thresholding achieved higher precision but lower recall when compared with the static value. Furthermore, the increase of precision is higher than the decrease of recall, resulting in better F1 scores as well. In the baseline dynamic thresholding was needed due to the system’s sensitiveness to the threshold value. In contrast, for our system there is a bigger gap between similarity scores of parallel and non-parallel sentence pairs due to segment length based weighting, so for this reason the tuned static value worked well on the test set. We manually analyzed German-English examples to highlight the differences of our system and the baseline. We show samples in Table 2 where 1 and 2 are falsely mined by the baseline while 3 is missed by our proposed system. Although example 1 seems parallel, there is some additional information on the source side. Since the words are similar, the baseline system incorrectly mines this pair. 
On the other hand, our approach ignores it because the detected segment is only Competitiveness Report des World Economic Forum ver¨offentlicht, while the words in the beginning do not form a continuous segment thus decreasing its overall score aggressively. Similarly, example 2 has different content at the end of the sentence pair which makes the detected segment short even though there are similar words in the pair. Example 3 is a parallel sentence pair which was missed by our system but not by the baseline. The reason lies in the wording of a short segment in the sentences. The source side phrase bis auf drei Ausnahmen (with three exceptions) is expressed as all but three on the target side. This difference results in two shorter segments (die 20 gr¨oßten St¨adte der Welt and in Schwellenl¨andern zu finden) in the sentence which decreases the similarity score below the threshold. Such false negatives occurred when a short non-parallel segment divides a longer parallel segment which could be solved by either using larger window size for the average filter or by merging segments if they are a few tokens away from each other. On the other hand, this could also introduce false positives. In general, we can conclude that we improved F1 score significantly, except for French-English where the baseline performed only a couple of percentage points better. Furthermore, our method achieved the highest precision, out-performing the baseline in all three language pairs, which is more important when mining from the web (Xu and Koehn, 2017). 5 Improving Unsupervised MT Since, parallel sentence mining is mostly important for downstream tasks such as low resource machine translation, we now show that mined sentences improve MT performance, which was not 1230 shown before. In this section we mine parallel data from real life data sources and use the extracted sentences to improve the performance of unsupervised MT. For this we simulate a lowresource setup for the German-English language pair similarly to previous work on unsupervised MT (Artetxe et al., 2018c; Lample et al., 2018b). 5.1 Evaluation Setup To mine parallel sentence pairs we use comparable monolingual data for both German and English. For this we use the news crawl data between 2007 and 2015 released by the WMT 2016 translation shared task (Bojar et al., 2016) containing about 140M and 114M German and English sentences respectively after length based filtering (see below). As a first step, we build unsupervised BWEs on the same data as (Artetxe et al., 2018c), i.e., newscrawl between 2007 and 2013, using the same procedure mentioned earlier. The built BWEs are used to create the dictionary of word similarities for the mining and to initialize the NMT system. We consider German as the source language during the mining process. Before running our system on the full data to extract sentences we batch the data to decrease the number of sentence pair candidates. Assuming that different news portals cover a given event in the same year we only look for parallel sentences within the same year. We note that further use of batching could be possible if more fine grained date information is available. Furthermore, we also batch texts based on their length assuming that sentences with very different number of tokens are not parallel. We use sentences with length between 10 and 50 tokens and make batches with step size 5. We also apply pre-filtering within the batches. 
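The year- and length-based batching described above can be sketched as follows; the exact bucketing (e.g., whether neighbouring length buckets are also compared) is our simplification of the procedure.

```python
from collections import defaultdict

def batch_key(year, n_tokens, min_len=10, max_len=50, step=5):
    """Assign a sentence to a (year, length-bucket) batch, or None if it falls
    outside the 10-50 token range; candidates are only paired within a batch."""
    if not (min_len <= n_tokens <= max_len):
        return None
    return year, (n_tokens - min_len) // step

def build_batches(sentences):
    """sentences: iterable of (year, tokens) pairs; returns batches keyed as above."""
    batches = defaultdict(list)
    for year, tokens in sentences:
        key = batch_key(year, len(tokens))
        if key is not None:
            batches[key].append(tokens)
    return batches
```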
This method drastically decreased the runtime of the mining procedure which took around 1 week using 40 threads on a 2.27GHz CPU. Since tuning would have been time consuming, we based our hyperparameters on the experiments in the previous section and on preliminary experiments. In order to increase the precision of mined sentences we chose an aggressive setup for window size and minimum segment length, requiring long continuous segments in the sentences. We made the following choices: threshold value for segment detection 0.3; window size of average filter 5; threshold value for deciding parallelism 0.3; minimum segment length 70%. At the end we extracted around 220K parallel sentence pairs from the full dataset. 5.2 Machine Translation System As the unsupervised MT system we use the neural approach proposed by Artetxe et al. (2018c). The system is based on unsupervised BWEs as the initial bilingual signal connecting the source and target languages. The system mostly follows the standard encoder-decoder architecture using RNN layers and attention mechanism (Bahdanau et al., 2014). One difference compared to the standard architecture is its dual structure. In contrast to general NMT systems which are usually built for a specific translation direction, the system is capable of performing both source→target and target→source translation. This is achieved by having a shared encoder for both languages which encodes source and target sentences similarly. The encoders of the system are initialized with the pretrained BWEs which are kept fixed during training. On top of the shared encoders separate decoders generate the translation of the input for each language using the encoder’s output. Training is performed in an iterative manner where each iteration consists of a denoising and an on-the-fly backtranslation step. The goal of the denoising step is to learn good quality representations of both source and target sentences in the encoder and to learn how to decode these representations. Since parallel data is not available, this process is done monolingually, i.e., encoding the input and decoding to the original language, similarly to auto encoding. In order to prevent simple copying of words, a random noise is applied on the input sentences and the task is to denoise the input. To tie source and target representations more strongly backtranslation is also performed at each iteration (Sennrich et al., 2016), and synthetic parallel data is generated, by translating sentences to the other language using the system’s current parameters, and then running a training step using the backtranslation as input to predict the original sentence. To incorporate the mined parallel sentences we used them during the iterative process. At each iteration on top of the denoising and backtranslation steps we also run a training step on the mined parallel sentences in both source→target and target→source directions to train model pa1231 unsup 07-13 all 07-13 long 07-15 all 07-15 long europarl avg align avg align avg align avg align WMT14 de-en 10.35 10.47 11.26 10.77 11.56 10.59 11.79 11.05 11.20 14.14 en-de 6.30 6.23 6.91 5.14 6.82 6.55 7.26 6.16 6.78 8.96 WMT16 de-en 13.07 13.35 14.35 14.09 14.95 12.99 15.39 14.16 14.29 18.06 en-de 8.59 8.72 9.69 7.10 10.01 8.92 10.23 8.62 9.79 12.66 Table 3: NMT experiments using mined parallel sentences. We compare results using mined sentence pairs from Hangya et al. (2018) and our approach. Texts before 2014 is used in 07-13 while all data is used in 07-15. 
We also restrict the minimum sentence length to 16 tokens in case of long. We show a fully unsupervised system using no parallel sentences, and an oracle using europarl parallel sentences. all long avg mined from 07-13 3,945,931 2,626,599 mined from 07-15 10,651,736 6,858,384 align mined from 07-13 90,707 8,358 mined from 07-15 218,126 16,677 europarl 218,126 — Table 4: Number of parallel sentence pairs in the datasets. rameters. We use words as tokens in our experiments (but we note that byte-pair encoding was slightly better in (Artetxe et al., 2018c)). 5.3 Results We evaluate MT experiments on the WMT14 and WMT16 test sets and present BLEU scores with the neural MT system in Table 3. We compare our approach (using dynamic thresholding) with two baseline systems. We rerun5 the setup presented in (Artetxe et al., 2018c) without any mined parallel data (unsup). In addition, we use the system of (Hangya et al., 2018) with dynamic thresholding to mine parallel sentences (avg). We ran multiple sets of experiments by splitting the mined data along two dimensions. We used sentences before 2014 only in lines 07-13 in order to use data that are from the past when evaluating on the WMT14 test set. All the data was used in 07-15. Furthermore, looking at the mined data we noticed that shorter sentences tend to be more noisy. For this reason, we only used sentences that are at least 16 tokens long in long. As an oracle experiment, we used true parallel sentences from europarl by randomly sampling the same amount as the overall mined pairs to give a theoretic upper bound of the results with the used NMT system. The exact number of sentence pairs in each dataset used is 5Original results were shown only on WMT14 which are comparable to our BLEU scores. shown in Table 4. Based on the scores in Table 3 it can be seen that by using mined sentences we achieved a significant performance increase compared to the unsupervised baseline. Our system outperformed the avg baseline as well in all setups. Furthermore, our approach achieved improvements compared to the unsupervised system in all cases while the avg baseline approach achieved negative results as well. Based on Table 4 avg mines significantly more sentence pairs compared to our proposed approach, which contains noise leading to performance degradation. This result supports the claim of our work, i.e., relying only on word similarities can lead to the mining of sentence pairs which have similar meanings but are not exactly parallel. For all test sets best results were achieved using all mined data by our system. Looking at the effect of length filtering it can be seen that this step helped when mining from 07-13 but not when using data from all years. From this we conclude, that if there are only a smaller number of parallel sentences better quality is important but quantity suppresses a small amount of noise in the 07-15 setup. Comparing scores on WMT14 with and without data from the same year and the future no clear difference can be seen. Furthermore, the BLEU score differences between the time intervals on WMT14 strongly follows that on WMT16 where all of the sentences are from the past. From this we conclude that the unsupervised MT system generalizes well using older data. Using true parallel data from europarl achieved even higher results. The reason for this is that the majority of the mined sentences are short and more noisy. 
Based on this, one possible future improvement could be to use more aggressive parameters when mining from short sentences while using more permissive parameters to mine longer sentences. 1232 source Wenn Justin Bieber einen Kaffee trinkt, staunt man an der Fensterscheibe. reference When Justin Bieber drinks coffee people goggle through the window. unsup If Justin Timberlake ate a coffee, you buzzing to the window. 07-15 all If Justin Bieber drank a coffee, you wonder at the window. source Etwa die H¨alfte der demokratischen W¨ahler der Vorwahlen landesweit sagen, dass sie mit Begeisterung Clinton unterst¨utzen w¨urden, wenn sie von der Partei nominiert w¨urde. reference About half of Democratic primary voters nationwide say they would enthusiastically support Clinton if she became the party’s nominee. unsup Roughly half of the pro-election voters nationwide voters say they would support Obama’s support with Clinton if they would be nominated by the party. 07-15 all About half of the Democratic primary voters nationwide say that they would support Clinton with enthusiasm if they would be nominated by the party. source und sagte, er habe auf jemand geschossen und jemand get¨otet reference and said he had shot and killed someone unsup and he said he had been shot on someone and killed 07-15 all and he said he had shot and killed someone Table 5: Example translations comparing the unsupervised baseline with adding mined parallel sentences on WMT16. We manually analyzed the translations given by the unsupervised baseline system and the setup when we used all the sentence pairs mined by our approach on WMT16. We show examples depicting differences in Table 5. One aspect where the added parallel sentences clearly helped is the handling of named entities. As the first and second examples show, the baseline system often mixes up names which is due to their similar representations in BWE space. By adding parallel data the system could learn to match the source and target side representations of a given entity, i.e., copy the correct word to the translation. We also found that the fluency of translations is also improved which is demonstrated by the second and third examples. The second example shows an important weakness of the baseline, which is that it tends to be redundant, e.g., by mentioning voters and support twice. In addition, it mentions US presidency related entities twice, once as Clinton and once confusing it with Obama. On the other hand, by using parallel sentences the results are more fluent and accurate. While the meaning of the third example was correctly translated, the wording used by the baseline is unnatural in contrast to the 07-15 all setup. 6 Conclusions Parallel sentence extraction is important for providing an additional bilingual signal for many downstream tasks in low resource setups. Most previous work tackled this problem using supervised techniques which made their applicability problematic. In this work, we proposed a fully unsupervised system for parallel sentence extraction. We showed that a previous unsupervised system, which only relies on word similarity in source and target language sentences, often mines false positives because not all sentences having similar words are parallel. To overcome this problem we introduced the detection of continuous parallel segments based on word alignments. We filter candidates having too short segments and weight the similarity score of the rest based on segment lengths. 
We showed that using our method better performance could be achieved on the BUCC 2017 parallel sentence extraction task compared to previous work. In contrast to previous unsupervised work, we also extracted sentences from real world comparable corpora and showed better translation performance when using these sentence pairs, opening up new possibilities for using small amounts of parallel data in purely unsupervised MT approaches. Our analysis showed that both handling of named entities and the fluency of sentences improved. We publicly release our system6 to support MT communities especially for low-resource setups. Acknowledgments We would like to thank the anonymous reviewers for their valuable input. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement №640550). 6https://github.com/hangyav/UnsupPSE 1233 References David Alvarez-Melis and Tommi Jaakkola. 2018. Gromov-Wasserstein Alignment of Word Embedding Spaces. In Proc. EMNLP. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proc. ACL. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. Unsupervised Statistical Machine Translation. In Proc. EMNLP. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018c. Unsupervised Neural Machine Translation. In Proc. ICLR. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. In Proc. ICLR. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5. Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleˇs Tamchyna. 2014. Findings of the 2014 Workshop on Statistical Machine Translation. In Proc. 9th Workshop on Statistical Machine Translation. Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, et al. 2016. Findings of the 2016 conference on machine translation. In Proc. Conference on Machine Translation. Houda Bouamor and Hassan Sajjad. 2018. H2@ BUCC18: Parallel Sentence Extraction from Comparable Corpora Using Multilingual Sentence Embeddings. In Proc. BUCC. Fabienne Braune, Viktor Hangya, Tobias Eder, and Alexander Fraser. 2018. Evaluating bilingual word embeddings on the long tail. In Proc. NAACL-HLT. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word Translation Without Parallel Data. In Proc. ICLR. Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2015. Improving Zero-Shot Learning by Mitigating the Hubness Problem. In Proc. workshop track at ICLR. Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2016. Learning crosslingual word embeddings without bilingual corpora. In Proc. EMNLP. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proc. EACL. Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: Fast bilingual distributed representations without word alignments. In Proc. ICML. Francis Gr´egoire and Philippe Langlais. 2017. 
BUCC 2017 Shared Task: a First Attempt Toward a Deep Learning Framework for Identifying Parallel Sentences in Comparable Corpora. In Proc. BUCC. Jeenu Grover and Pabitra Mitra. 2017. Bilingual Word Embeddings with Bucketed CNN for Parallel Sentence Extraction. In Proc. ACL, Student Research Workshop. Viktor Hangya, Fabienne Braune, Yuliya Kalasouskaya, and Alexander Fraser. 2018. Unsupervised Parallel Sentence Extraction from Comparable Corpora. In Proc. IWSLT. Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-yan Liu, Renqian Luo, Arul Menezes, Tao Qin, and Microsoft Ai. 2018. Achieving Human Parity on Automatic Chinese to English News Translation. arXiv:1803.05567. Jeff Johnson, Matthijs Douze, and Herv´e J´egou. 2017. Billion-scale similarity search with GPUs. CoRR, abs/1702.08734. Harold W Kuhn. 1955. The hungarian method for the assignment problem. Naval research logistics quarterly, 2(1-2). Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From Word Embeddings to Document Distances. In Proc. ICML. Guillaume Lample, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised Machine Translation Using Monolingual Corpora Only. In Proc. ICLR. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018b. Phrase-Based & Neural Unsupervised Machine Translation. In Proc. EMNLP. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168. Dragos Stefan Munteanu, Alexander Fraser, and Daniel Marcu. 2004. Improved machine translation performance via parallel sentence extraction from comparable corpora. In Proc. NAACL-HLT. Dragos Stefan Munteanu and Daniel Marcu. 2006. Extracting parallel sub-sentential fragments from nonparallel corpora. In Proc. ACL. 1234 Parker Riley and Daniel Gildea. 2018. Orthographic Features for Bilingual Lexicon Induction. In Proc. ACL. Holger Schwenk. 2018. Filtering and Mining Parallel Data in a Joint Multilingual Space. In Proc. ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proc. ACL. Jason R. Smith, Chris Quirk, and Kristina Toutanova. 2010. Extracting Parallel Sentences from Comparable Corpora using Document Level Alignment. In Proc. NAACL-HLT. D´aniel Varga, P´eter Hal´acsy, Andr´as Kornai, Viktor Nagy, L´aszl´o N´emeth, and Viktor Tr´on. 2007. Parallel corpora for medium density languages. Amsterdam Studies In The Theory And History Of Linguistic Science Series 4. Ivan Vuli´c and Marie-Francine Moens. 2015. Bilingual word embeddings from non-parallel documentaligned data applied to bilingual lexicon induction. In Proc. ACL. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized Word Embedding and Orthogonal Transform for Bilingual Word Translation. In Proc. NAACL-HLT. Hainan Xu and Philipp Koehn. 2017. Zipporah: a fast and scalable data cleaning system for noisy webcrawled parallel corpora. In Proc. EMNLP. Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Unsupervised Neural Machine Translation with Weight Sharing. In Proc. ACL. Pierre Zweigenbaum, Serge Sharoff, and Reinhard Rapp. 2017. Overview of the Second BUCC Shared Task: Spotting Parallel Sentences in Comparable Corpora. In Proc. BUCC.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1235–1245 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1235 Unsupervised Bilingual Word Embedding Agreement for Unsupervised Neural Machine Translation Haipeng Sun1∗, Rui Wang2, Kehai Chen2, Masao Utiyama2, Eiichiro Sumita2, and Tiejun Zhao1 1Harbin Institute of Technology, Harbin, China 2National Institute of Information and Communications Technology (NICT), Kyoto, Japan [email protected], [email protected] {wangrui, khchen, mutiyama, eiichiro.sumita}@nict.go.jp Abstract Unsupervised bilingual word embedding (UBWE), together with other technologies such as back-translation and denoising, has helped unsupervised neural machine translation (UNMT) achieve remarkable results in several language pairs. In previous methods, UBWE is first trained using nonparallel monolingual corpora and then this pre-trained UBWE is used to initialize the word embedding in the encoder and decoder of UNMT. That is, the training of UBWE and UNMT are separate. In this paper, we first empirically investigate the relationship between UBWE and UNMT. The empirical findings show that the performance of UNMT is significantly affected by the performance of UBWE. Thus, we propose two methods that train UNMT with UBWE agreement. Empirical results on several language pairs show that the proposed methods significantly outperform conventional UNMT. 1 Introduction Since 2013, neural network based bilingual word embedding (BWE) has been applied to several natural language processing tasks (Mikolov et al., 2013; Faruqui and Dyer, 2014; Xing et al., 2015; Dinu et al., 2015; Lu et al., 2015; Wang et al., 2016; Artetxe et al., 2016; Smith et al., 2017; Wang et al., 2018). Recently, researchers have found that supervision is not always necessary (Cao et al., 2016; Zhang et al., 2017). Several unsupervised BWE (UBWE) methods (Conneau et al., 2018; Artetxe et al., 2018a) have been proposed and these have achieved impressive performance in wordtranslation tasks. The success of UBWE makes unsupervised neural machine translation (UNMT) possible. The combination of UBWE with denoising autoencoder and back-translation has ∗Haipeng Sun was an internship research fellow at NICT when conducting this work. led to UNMT that relies solely on monolingual corpora, with remarkable results reported for several language pairs such as English-French and English-German (Artetxe et al., 2018c; Lample et al., 2018a). In previous methods, UBWE is first trained using non-parallel monolingual corpora. This pretrained UBWE is then used to initialize the word embedding in the encoder and decoder of UNMT. That is, the training of UBWE and UNMT take place in separate steps. In this paper, we first empirically investigate the relationship between UBWE and UNMT. Our empirical results show that: • 1) There is a positive correlation between the quality of the pre-trained UBWE and the performance of UNMT. • 2) The UBWE quality significantly decreases during UNMT training. Based on these two findings, we hypothesize that the learning of UNMT with UBWE agreement would enhance UNMT performance. In detail, we propose two approaches, UBWE agreement regularization and UBWE adversarial training, to maintain the quality of UBWE during NMT training. Empirical results on several language pairs show that the proposed methods significantly outperform the original UNMT. The remainder of this paper is organized as follows. 
In Section 2, we introduce the background of UNMT. The results of preliminary experiments are presented and analyzed in Section 3. In Section 4, we propose methods to jointly train UNMT with UBWE agreement. In Sections 5 and 6 , we describe experiments to evaluate the performance of our approach and analyze the results. Section 7 introduces some related work and Section 8 concludes the paper. 1236 2 Background of UNMT There are three primary components of UNMT: UBWE initialization, denoising auto-encoder, and back-translation. Consider a sentence X in language L1 and a sentence Y in another language L2. The data spaces of the L1 sentence X and the L2 sentence Y are denoted by φL1 and φL2, respectively. After initialization by UBWE, the encoders and decoders of L1, L2 are trained through denoising and back-translation. The objective function Lall of the entire UNMT model would be optimized as: Lall = Lauto + Lbt, (1) where Lauto is the objective function for autodenoising, and Lbt is the objective function for back-translation. 2.1 Bilingual Word Embedding Initialization Unlike supervised NMT (Bahdanau et al., 2015; Chen et al., 2017a,b, 2018a; Vaswani et al., 2017), there are no bilingual supervised signals in UNMT. Fortunately, UBWE (Zhang et al., 2017; Artetxe et al., 2018a; Conneau et al., 2018) successfully learned translation equivalences between word pairs from two monolingual corpora. Typically, UBWE initializes the embedding of the vocabulary for the encoder and decoder of UNMT. The pre-trained UBWE provides naive translation knowledge to enable the back-translation to generate pseudo-supervised bilingual signals (Artetxe et al., 2018c; Lample et al., 2018a). The embeddings of the encoder and decoder change independently during the UNMT training process. 2.2 Denoising Auto-encoder The auto-encoder is difficult to learn useful knowledge for UNMT without some constraints. Otherwise, it would become a copying task that learned to copy the input words one by one (Lample et al., 2018a). To alleviate this problem, we utilize the same strategy of denoising autoencoder (Vincent et al., 2010), and noise in the form of random token swaps is introduced in this input sentence to improve the model learning ability (Hill et al., 2016; He et al., 2016). The denoising auto-encoder, which encodes a noisy version and reconstructs it with the decoder in the same language, is optimized by minimizing the objective function: Lauto = EX∼φL1[−logPL1→L1(X|C(X)] + EY ∼φL2[−logPL2→L2(Y |C(Y )], (2) where C(X) and C(Y ) are noisy versions of sentences X and Y , PL1→L1 (PL2→L2) denotes the reconstruction probability in the language L1 (L2). 2.3 Back-translation The denoising auto-encoder acts as a language model that has been trained in one language and does not consider the final goal of translating between two languages. Therefore, backtranslation (Sennrich et al., 2016) was adapted to train translation systems in a true translation setting based on monolingual corpora. Formally, given the sentences X and Y , the sentences YP (X) and XP (Y ) would be produced by the model at the previous iteration. The pseudo-parallel sentence pair (YP (X), X) and (XP (Y ), Y ) would be obtained to train the new translation model. Finally, the back-translation process is optimized by minimizing the following objective function: Lbt = EX∼φL1[−logPL2→L1(X|YP (X)] + EY ∼φL2[−logPL1→L2(Y |XP (Y )], (3) where PL1→L2 (PL2→L1) denotes the translation probability across two languages. 
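As a schematic illustration of Eqs. (1)-(3), one UNMT training step combines the two terms as below. This is a hedged, PyTorch-style sketch: the model interface (model.loss), the noise function C, and the translate helper are placeholders and do not correspond to a specific toolkit.

```python
def unmt_step(model, batch_l1, batch_l2, noise, translate):
    """One schematic UNMT training step (Eq. 1): denoising auto-encoding in each
    language plus on-the-fly back-translation. model.loss(src, tgt, direction) is
    assumed to return the cross-entropy of reconstructing tgt from src."""
    # Denoising auto-encoder term (Eq. 2): reconstruct each sentence from its noisy version C(.).
    l_auto = (model.loss(noise(batch_l1), batch_l1, "l1->l1")
              + model.loss(noise(batch_l2), batch_l2, "l2->l2"))

    # Back-translation term (Eq. 3): translate with the model from the previous iteration,
    # then train to recover the original sentence from the synthetic translation.
    pseudo_l2 = translate(batch_l1, "l1->l2")   # Y_P(X)
    pseudo_l1 = translate(batch_l2, "l2->l1")   # X_P(Y)
    l_bt = (model.loss(pseudo_l2, batch_l1, "l2->l1")
            + model.loss(pseudo_l1, batch_l2, "l1->l2"))

    return l_auto + l_bt  # L_all = L_auto + L_bt (Eq. 1)
```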
3 Preliminary Experiments

To investigate the relationship between UBWE and UNMT, we empirically choose one similar language pair (English-French, two languages from the same language family) and one distant language pair (English-Japanese, two languages from different language families) as the corpora. The detailed experimental settings for UBWE and UNMT are given in Section 5.

3.1 Effect of UBWE Quality on UNMT Performance

Figure 1 shows the UNMT performance using UBWE with different levels of accuracy. To obtain UBWE with different accuracy levels, we used the VecMap (Artetxe et al., 2018a) embeddings at different checkpoints to pre-train UNMT (accuracy "0" indicates that only monolingual embeddings were used for each language before VecMap training started).

[Figure 1: UNMT performance (BLEU) using UBWE with different levels of accuracy for Fr-En, En-Fr, Ja-En and En-Ja. Precision@1 indicates the accuracy of word translation using the top-1 predicted candidate in the MUSE test set (https://github.com/facebookresearch/MUSE).]

As the UBWE accuracy increased, the NMT performance of both language pairs increased. This indicates that the quality of pre-trained UBWE is important for UNMT.

3.2 Trend of UBWE Quality during UNMT Training

Figure 2 shows the trend in UBWE accuracy and BLEU score as UNMT proceeds through the training stage. VecMap was used to pre-train the word embeddings for the encoder and decoder of UNMT. We used the source embedding of the encoder and the target embedding of the decoder to calculate the word translation accuracy on the MUSE test set during UNMT training.

[Figure 2: UBWE accuracy (Precision@1) and BLEU score over the course of UNMT training for Fr-En and Ja-En.]

Regardless of the language, the UBWE performance decreased significantly over the course of UNMT training, as shown in Figure 2.

3.3 Analysis

The empirical results in this section show that the quality of pre-trained UBWE is important to UNMT. However, the quality of UBWE decreases significantly during UNMT training. We hypothesize that maintaining the quality of UBWE may enhance the performance of UNMT. In this subsection, we analyze some possible solutions to this issue.

Use fixed embeddings? As Figure 2 shows, the UBWE performance decreases significantly during the UNMT training process. Therefore, we try fixing the embeddings of the encoder and decoder on top of the original baseline system (Baseline-fix). Table 1 shows that the performance of the Baseline-fix system is quite similar to that of the original baseline system. In other words, Baseline-fix prevents the degradation of UBWE accuracy; however, the fixed embeddings also prevent UBWE from further improving UNMT training. Therefore, the fixed UBWE does not enhance the performance of UNMT.

Methods        Fr-En    En-Fr    Ja-En    En-Ja
Baseline       24.50    25.37    14.09    21.63
Baseline-fix   24.22    25.26    13.88    21.93

Table 1: Results of UNMT.

Use byte pair encoding (BPE) to increase shared subwords? For English-French and English-German UNMT, Lample et al. (2018b) concatenated the corpora of the two languages into a single monolingual corpus and adopted BPE to enlarge the number of shared subwords between the two languages. The pre-trained monolingual subword embeddings were used as the initialization for UNMT. Because there are many shared subwords in these similar language pairs, this method achieves better performance than other UBWE methods.
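The word-translation accuracy used throughout this section (Precision@1 on the MUSE test set) can be computed roughly as follows; during UNMT training, the source embeddings of the encoder and the target embeddings of the decoder are plugged in as src_emb and tgt_emb. The sketch uses plain nearest-neighbour retrieval and a single reference translation per source word, which is a simplification.

```python
import numpy as np

def precision_at_1(src_emb, tgt_emb, test_dict):
    """Word-translation Precision@1: for each (src_id, tgt_id) pair in a MUSE-style
    test dictionary, check whether the nearest target embedding (by cosine) of the
    source word is its reference translation."""
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    hits = 0
    for src_id, tgt_id in test_dict:
        prediction = int(np.argmax(tgt @ src[src_id]))   # nearest target word
        hits += int(prediction == tgt_id)
    return hits / len(test_dict)
```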
However, this initialization does not work for distant language pairs such as English-Japanese and English-Chinese, where there are few shared subwords. Using word-based embeddings in UNMT is more universal. In addition, word-based embeddings are easy to combine with UBWE technology. Therefore, we do not adopt BPE in the proposed method.

[Figure 3: (a) Architecture of UNMT with UBWE agreement regularization; (b) Architecture of UNMT with UBWE adversarial training.]

4 Train UNMT with UBWE Agreement

Based on the previous empirical findings and analyses, we propose two joint agreement mechanisms, i.e., UBWE agreement regularization and UBWE adversarial training, that enable UBWE and UNMT to interact during the training process, resulting in improved translation performance. Figure 3 illustrates the architecture of UNMT and the proposed agreement mechanisms. Generally, during UNMT training, an objective function $\mathcal{L}_{BWE}$ is added to ensure UBWE agreement. The general UNMT objective function can be reformulated as follows:

$\mathcal{L}_{all} = \mathcal{L}_{auto} + \mathcal{L}_{bt} + \lambda \mathcal{L}_{BWE}.$   (4)

4.1 UBWE Agreement Regularization

On the basis of the existing architecture of UNMT, we induce UBWE agreement regularization during back-translation to maintain the UBWE accuracy in the encoder and the decoder during UNMT training. A similarity function between the encoder and decoder embeddings is used to measure the UBWE accuracy, and the objective function $\mathcal{L}_{BWE}$ is

$\mathcal{L}_{BWE} \triangleq \mathcal{L}_{agreement} = Similarity(L_1, L_2) = Similarity(enc_{L_1}, dec_{L_2}) + Similarity(enc_{L_2}, dec_{L_1}),$   (5)

where $enc_{L_1}$ and $enc_{L_2}$ denote all word embeddings of encoders L1 and L2, respectively, and $dec_{L_1}$ and $dec_{L_2}$ denote all word embeddings of decoders L1 and L2, respectively.

As there is no test or development set that can be employed as a bilingual dictionary in UNMT, before computing $Similarity(L_1, L_2)$ we need to generate a synthetic word-pair dictionary to measure the UBWE accuracy during NMT training. Motivated by Conneau et al. (2018), we use the cross-domain similarity local scaling (CSLS) to measure the UBWE accuracy. This can also be viewed as the similarity between the source word embedding and the target word embedding:

$CSLS(x_i, y_i) = 2 \cdot \cos(enc_{x_i}, dec_{y_i}) - r(x_i) - r(y_i),$   (6)

$r(x_i) = \frac{1}{K} \sum_{y \in N(x_i)} \cos(enc_{x_i}, dec_{y}),$   (7)

$r(y_i) = \frac{1}{K} \sum_{x \in N(y_i)} \cos(enc_{x}, dec_{y_i}),$   (8)

where $N(x_i)$ denotes the K nearest neighbors of the source word $x_i$, and similarly for $N(y_i)$; $enc_{x_i}$ denotes the embedding of word $x_i$ in encoder L1 and $dec_{y_i}$ denotes the embedding of word $y_i$ in decoder L2. As the size of the entire vocabulary is large, we select a subset as the synthetic word-pair dictionary. By ranking the CSLS scores, we can select the most accurate word pairs $\{x_i, y_i\}$ as the synthetic dictionary $Dict_{x \rightarrow y}$. The word pairs for the opposite direction, $Dict_{y \rightarrow x} = \{y_j, x_j\}$, are obtained by the same method.
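A minimal sketch of this CSLS-based synthetic dictionary selection is given here, assuming the encoder and decoder embedding matrices are length-normalized NumPy arrays; the exact ranking and tie-breaking scheme is our simplification.

```python
import numpy as np

def csls_matrix(enc, dec, k=10):
    """CSLS scores (Eqs. 6-8) between encoder and decoder embeddings; rows are
    assumed length-normalized, so dot products are cosine similarities."""
    cos = enc @ dec.T                                   # cos(enc_x, dec_y)
    r_x = np.sort(cos, axis=1)[:, -k:].mean(axis=1)     # r(x): mean cos to K nearest decoder words
    r_y = np.sort(cos, axis=0)[-k:, :].mean(axis=0)     # r(y): mean cos to K nearest encoder words
    return 2 * cos - r_x[:, None] - r_y[None, :]

def synthetic_dictionary(enc, dec, size, k=10):
    """Pair each source word with its best target word by CSLS and keep the
    `size` highest-scoring pairs as the synthetic dictionary."""
    csls = csls_matrix(enc, dec, k)
    best_y = csls.argmax(axis=1)
    best_scores = csls[np.arange(len(enc)), best_y]
    top_x = np.argsort(-best_scores)[:size]
    return list(zip(top_x.tolist(), best_y[top_x].tolist()))
```

The selected pairs are then used in the agreement regularizer defined next.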
encyj denotes the embedding of word yj by encoder L2 and decxj denotes the embedding of word xj by decoder L1. Both dictionary sizes are set to |Dict|. Therefore, the similarity between the word embeddings in the encoder and decoder is measured as Similarity(encL1, decL2) ≈ 1 |Dict| |Dict| X i (1 −cos(encxi, decyi)). (9) Similarity(encL2, decL1) ≈ 1 |Dict| |Dict| X j (1 −cos(encyj, decxj)). (10) The above similarity between word pairs in Dict is used for UBWE agreement regularization during back-translation. Note that the synthetic word-pair dictionary is dynamically selected in each epoch of UNMT training. 4.2 UBWE Adversarial Training In UBWE, there is a transformation matrix to project the source word embedding to the target word embedding. Motivated by Conneau et al. (2018), we induce a transformation matrix using an adversarial approach . The generator is estimated as: G1 = W1encx, (11) where encx is the L1 the encoder word embedding, decy is the corresponding L2 decoder word embedding, and W1 is the transformation matrix that project the embedding space of encx onto that of decy. The discriminator D1 is a multilayer perceptron representing the probability that the word embedding comes from this language. It is trained to discriminate the language to which the word embedding between W1encx and decy belongs. W1 is trained to confuse the discriminator D1 by making W1encx and decy increasingly similar. In other words, we train D1 to maximize the probability of choosing the accurate language between the original word embedding and samples from G1. The generator G1 is trained to minimize log(1−D1(G1(encx))). Thus, the two-player minimax game (Goodfellow et al., 2014) with value function V (G1, D1) is optimized as: min G1 max D1 V (D1, G1) = Edecy[log D1(decy)] + Eencx[log(1 −D1(G1(encx)))]. (12) D2 and G2 are similar to D1 and G1. The objective functions for the discriminator D1 and generator G1 can be written as: LD1 = Eencx[−log(1 −D1(G1))] + Edecy[−log(D1(decy)], (13) LG1 = Eencx[−log(D1(G1))] + Edecy[−log(1 −D1(decy)]. (14) LD2 and LG2 are similar to LD1 and LG1. After inducing UBWE adversarial training into UNMT, the LBWE objective function is minimized as LBWE ≜Ladv = Ladv1 + Ladv2, (15) where Ladv1 = LG1 + LD1 and Ladv2 = LG2 + LD2. The proposed LBWE (Lagreement or Ladv) is added to the Lall in Eq. 4 during back-translation of UNMT training as shown in Figure 3. 5 Experiments 5.1 Datasets The proposed methods were evaluated on three language pairs: French-English (Fr-En), GermanEnglish (De-En), and Japanese-English (Ja-En). Fr-En and De-En are similar European language pairs. We used 30 million sentences from the WMT monolingual News Crawl datasets from 2007 to 2013. Ja-En is a distant languages pair and so UBWE training is much more difficult than for similar European language pairs (Søgaard et al., 2018). In addition, Japanese and English are different language families and their word orderings are quite different. As a result, the performance of Ja-En UNMT is too poor to further empirical study if only pure monolingual data are used. Therefore, we constructed simulated experiments using shuffled parallel sentences, i.e., 3.0M sentence pairs from the ASPEC corpus for Ja-En. We reported the results on WMT newstest2014 for Fr-En, WMT newstest2016 for De-En, and WAT-2018 ASPEC testset for Ja-En. 
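Returning to the adversarial objective of Section 4.2, the discriminator and generator losses of Eqs. (13)-(14) can be sketched as follows. This is an illustrative PyTorch fragment, not the actual implementation: the hidden size of the MLP discriminator and the optimization details (e.g., updating only W1 with the generator loss) are assumptions.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """MLP that predicts the probability that an embedding comes from the decoder side."""
    def __init__(self, dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.LeakyReLU(0.2),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x).squeeze(-1)

def adversarial_losses(W1, D1, enc_x, dec_y, eps=1e-8):
    """Eqs. (13)-(14): G1 = W1 enc_x maps encoder embeddings towards the decoder space;
    D1 tries to tell mapped encoder embeddings from true decoder embeddings, while the
    generator tries to fool it. In practice only W1 would be updated with loss_g."""
    g1 = enc_x @ W1.t()                               # generator output G1(enc_x)
    d_fake, d_real = D1(g1.detach()), D1(dec_y)
    loss_d = -(torch.log(1 - d_fake + eps).mean()     # Eq. (13)
               + torch.log(d_real + eps).mean())
    loss_g = -(torch.log(D1(g1) + eps).mean()         # Eq. (14)
               + torch.log(1 - D1(dec_y) + eps).mean())
    return loss_d, loss_g
```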
5.2 UBWE Settings

For UBWE training, we first used the monolingual corpora described above to train the embeddings for each language independently with fastText (Bojanowski et al., 2017; https://github.com/facebookresearch/fastText, default settings). The word embeddings were normalized by length and mean-centered before bilingual projection. We then used VecMap (Artetxe et al., 2018a; https://github.com/artetxem/vecmap, default settings) to project the two monolingual word embeddings into one space. To evaluate the quality of UBWE, we used the accuracy of word translation with the top-1 predicted candidate on the MUSE test set as the criterion.

[Figure 4: The trends of UBWE quality (Precision@1) and BLEU score for the baseline (Base), UBWE agreement regularization (AR), and UBWE adversarial training (AT) during UNMT training on (a) Fr-En and (b) Ja-En.]

Methods                            De-En    En-De    Fr-En    En-Fr    Ja-En    En-Ja
Artetxe et al. (2018c)             n/a      n/a      15.56    15.13    n/a      n/a
Lample et al. (2018a)              13.33    9.64     14.31    15.05    n/a      n/a
Yang et al. (2018)                 14.62    10.86    15.58    16.97    n/a      n/a
Lample et al. (2018b)              21.0     17.2     24.2     25.1     n/a      n/a
UNMT Baseline                      21.23    17.06    24.50    25.37    14.09    21.63
+ UBWE agreement regularization    22.38++  18.04++  25.21++  27.86++  16.36++  23.01++
+ UBWE adversarial training        22.67++  18.29++  25.87++  28.38++  17.22++  23.64++

Table 2: Performance (BLEU score) of UNMT. "++" after a score indicates that the proposed method was significantly better than the UNMT baseline at significance level p < 0.01.

5.3 UNMT Settings

In the training process for UNMT, we used the transformer-based UNMT toolkit (https://github.com/facebookresearch/UnsupervisedMT) and the settings of Lample et al. (2018b). That is, we used four layers in both the encoder and the decoder. Three out of the four encoder and decoder layers were shared between the source and target languages. The dimension of the hidden layers was set to 512. Training used a batch size of 32 and the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of 0.0001 and β1 = 0.5. The vocabulary size was set to 60k by concatenating the source and target corpora. We performed 140 epochs (approximately 500K iterations) to train every model; note that the definition of epoch in UNMT differs from that in supervised NMT, and we followed the settings of Lample et al. (2018b)'s toolkit, i.e., 3,500 iterations per epoch. The case-sensitive BLEU score computed with the multi-bleu.perl script from Moses (https://github.com/moses-smt/mosesdecoder) was used as the evaluation metric. For model selection, we followed the strategy described by Lample et al. (2018a): the BLEU score computed between the original source sentences and their reconstructions was used as the criterion, and we selected the model that had the highest average BLEU score over the two translation directions. For the proposed methods, both UBWE agreement regularization and UBWE adversarial training were added as objective functions at the beginning of UNMT training. The detailed parameter settings are discussed in Section 6.
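For reference, the unsupervised model-selection criterion described above can be sketched as follows. sacrebleu is used here purely for illustration (the scores reported in this paper come from multi-bleu.perl), and translate is a placeholder for decoding with the current model.

```python
import sacrebleu

def unsupervised_selection_score(model, src_sentences, tgt_sentences, translate):
    """Round-trip reconstruction BLEU of Lample et al. (2018a): translate each
    monolingual sentence to the other language and back, score the reconstruction
    against the original, and average the two directions."""
    src_round_trip = translate(model, translate(model, src_sentences, "src->tgt"), "tgt->src")
    tgt_round_trip = translate(model, translate(model, tgt_sentences, "tgt->src"), "src->tgt")
    bleu_src = sacrebleu.corpus_bleu(src_round_trip, [src_sentences]).score
    bleu_tgt = sacrebleu.corpus_bleu(tgt_round_trip, [tgt_sentences]).score
    return (bleu_src + bleu_tgt) / 2.0
```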
5.4 Performance Figure 4 shows the trend in UBWE quality and BLEU score during UNMT training on Fr-En and Ja-En. Our observations are as follows: 1) For all systems, the UBWE accuracy decreases during UNMT training. This is consistent with our finding in the preliminary experiments. 2) For the system with UBWE agreement regularization and UBWE adversarial training, UBWE accuracy decreased much more slowly than in the original baseline system. This indicates that the proposed methods effectively mitigated the degradation of UBWE accuracy. 3) Regarding the two proposed methods, UBWE agreement regularization was better at mitigating the degradation of UBWE accuracy than UBWE adversarial training. Table 2 presents the detailed BLEU scores of the UNMT systems on the De-En, Fr-En, and JaEn test sets. Our observations are as follows: 1) Our re-implemented baseline performed similarly with the state-of-the-art method of Lample et al. (2018b). This indicates that the baseline is a strong system. 2) The proposed methods significantly outperformed the corresponding baseline in all the language pairs by 1∼3 BLEU scores. 3) Regarding the two proposed methods, UBWE adversarial training performed slightly better than UBWE agreement regularization by BLEU score, although UBWE agreement regularization was better at maintaining UBWE accuracy. The reason may be that agreement regularization is just added to the training objective of UNMT. In comparison, UBWE adversarial training is jointly trained with UNMT, thus has more interaction with UNMT model. 6 Discussion We now analyze the effect of the hyperparameters. There are two primary factors that affect the performances of the proposed methods: the synthetic word-pair dictionary size for UBWE agreement regularization and λ for UBWE adversarial training. 6.1 Effect of Dictionary Size We first evaluated the impact of the synthetic word-pair dictionary size |Dict| during UBWE agreement regularization training on the Fr-En task. As indicated by Table 3, almost all models with different dictionary sizes outperformed the baseline system. This indicates that the proposed method is robust. Dict Size Fr-En En-Fr BLEU BLEU Baseline 24.50 25.37 20K 25.15 27.18 10K 25.10 27.48 5K 25.14 27.58 3K 25.21 27.86 1K 25.25 27.40 500 25.13 27.07 Table 3: Effect on Dictionary Size We also investigated the relationship between dictionary size and UBWE accuracy. As shown in Fig. 5, a larger dictionary size results in a slower decrease in UBWE accuracy. This indicates that a larger dictionary size helps estimate a better UBWE agreement. However, larger dictionary size did not always obtain a higher BLEU as shown in Table 3. The model with a dictionary size of 3000 achieved the best performance. 0 20 40 60 80 100 120 140 40 60 80 Epoch UBWE Precision@1 Baseline 20K 10K 5K 3K 1K 500 Figure 5: UBWE accuracy with respect to dictionary size on the Fr-En test set during UNMT training. 1242 0.010.1 0.3 0.5 0.8 1 5 10 100 24 26 28 λ value BLEU score Base-En-Fr Base-Fr-En AT-En-Fr AT-Fr-En Figure 6: Effect of Hyper-parameter λ for UBWE adversarial training (AT) model on the En ↔Fr dataset. 6.2 Effect of Hyper-parameter λ In Figure 6, we empirically investigated how the hyper-parameter λ in Eq. (4) affects the UNMT performance on the Fr-En task. The selection of λ influences the role of the LBWE across the entire UNMT training process. Larger values of λ cause the LBWE to play a more important role than the back-translation and denoising loss terms. 
The smaller the value of λ, the less important are the LBWE. As the Fig. 6 shows, λ ranging from 0.01 to 10 nearly all enhanced UNMT performance and a balanced λ = 1 achieved the best performance. 6.3 Efficiency We now discuss the efficiency of our proposed methods. Table 4 indicates that UBWE agreement regularization does not increase the number of parameters. UBWE adversarial training adds very few parameters. The training speed of these methods is almost the same. In addition, the proposed methods do not affect the UNMT decoding. Thus, our proposed methods do not affect the speed of the model. Parameters Speed Baseline 120,141K 3784 UBWE agreement regularization 120,141K 3741 UBWE adversarial training 120,764K 3733 Table 4: Analysis on parameters and training speed (number of processed words per second on one P100). 7 Related Work The supervised BWE (Mikolov et al., 2013), which exploits similarities between the source language and the target language through a linear transformation matrix, serves as the basis for many NLP tasks, such as machine translation (Bahdanau et al., 2015; Vaswani et al., 2017; Chen et al., 2018b; Zhang and Zhao, 2019), dependency parsing (Zhang et al., 2016; Li et al., 2018), semantic role labeling (He et al., 2018; Li et al., 2019). However, the lack of a large wordpair dictionary poses a major practical problem for many language pairs. UBWE has attracted considerable attention. For example, Artetxe et al. (2017) proposed a self-learning framework to learn BWE with a 25-word dictionary, and Artetxe et al. (2018a) extended previous work without any word dictionary via fully unsupervised initialization. Zhang et al. (2017) and Conneau et al. (2018) proposed UBWE methods via generative adversarial network training. Recently, several UBWE methods (Conneau et al., 2018; Artetxe et al., 2018a) have been applied to UNMT (Artetxe et al., 2018c; Lample et al., 2018a). These rely solely on monolingual corpora in each language via UBWE initialization, denoising auto-encoder, and back-translation. A shared encoder was used to encode the source sentences and decode them from a shared latent space (Artetxe et al., 2018c; Lample et al., 2018a). The difference is that Lample et al. (2018a) used a single shared decoder and Artetxe et al. (2018c) leveraged two independent decoders for each language. Yang et al. (2018) used two independent encoders for each language with a weight-sharing mechanism to overcome the weakness of retaining the uniqueness and internal characteristics of each language. Lample et al. (2018b) achieved remarkable results in several similar languages such as English-French by concatenating two bilingual corpora as one monolingual corpus and using monolingual embedding pre-training in the initialization step. This initialization achieves better performance than other UBWE methods. However, it does not work in some distant language pairs such as English-Japanese. This is why we did not use this initialization process for UBWE in our method. In addition, an alternative unsupervised method based on statistical machine translation (SMT) was proposed (Lample et al., 2018b; Artetxe et al., 2018b). The unsupervised machine translation performance was improved through combining UNMT and unsupervised SMT (Marie and Fujita, 2018; Ren et al., 2019; Artetxe et al., 2019). More recently, Lample and Conneau (2019) achieved 1243 better UNMT performance through introducing the pretrained language model. 
Neural network based language model has been shown helpful in supervised machine translation (Wang et al., 2014; Wang et al., 2018; Marie et al., 2018). We think that the proposed agreement mechanism can work with the pretrained language model. 8 Conclusion UBWE is a fundamental component of UNMT. In previous methods, the pre-trained UBWE is only used to initialize the word embedding of UNMT. In this study, we found that the performance of UNMT is significantly affected by the quality of UBWE, not only in the initialization stage, but also during UNMT training. Based on this finding, we proposed two joint learning methods to train UNMT with UBWE agreement. Empirical results on several language pairs show that the proposed methods can mitigate the decrease in UBWE accuracy and significantly improve the performance of UNMT. Acknowledgments The corresponding authors are Rui Wang and Tiejun Zhao. This work was partially conducted under the program “Promotion of Global Communications Plan: Research, Development, and Social Demonstration of Multilingual Speech Translation Technology” of the Ministry of Internal Affairs and Communications (MIC), Japan. Rui Wang was partially supported by JSPS grant-in-aid for early-career scientists (19K20354): “Unsupervised Neural Machine Translation in Universal Scenarios” and NICT tenure-track researcher startup fund “Toward Intelligent Machine Translation”. References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2289–2294, Austin, Texas. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451– 462, Vancouver, Canada. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789– 798, Melbourne, Australia. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632–3642, Brussels, Belgium. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. An effective approach to unsupervised machine translation. CoRR, abs/1902.01313. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018c. Unsupervised neural machine translation. In Proceedings of the Sixth International Conference on Learning Representations, Vancouver, Canada. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135– 146. Hailong Cao, Tiejun Zhao, Shu Zhang, and Yao Meng. 2016. A distribution-based model to learn bilingual word embeddings. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1818–1827, Osaka, Japan. 
Kehai Chen, Rui Wang, Masao Utiyama, Lemao Liu, Akihiro Tamura, Eiichiro Sumita, and Tiejun Zhao. 2017a. Neural machine translation with source dependency representation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2846–2852, Copenhagen, Denmark. Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, and Tiejun Zhao. 2018a. Syntax-directed attention for neural machine translation. In AAAI Conference on Artificial Intelligence, pages 4792– 4798, New Orleans, Lousiana, USA. Kehai Chen, Tiejun Zhao, Muyun Yang, and Lemao Liu. 2017b. Translation prediction with source dependency-based context representation. In AAAI Conference on Artificial Intelligence, pages 3166– 3172, San Francisco, California, USA. Kehai Chen, Tiejun Zhao, Muyun Yang, Lemao Liu, Akihiro Tamura, Rui Wang, Maosao Utiyama, and Eiichro Sumita. 2018b. A neural approach to source 1244 dependence based context model for statistical machine translation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(2):266–280. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. In Proceedings of the Sixth International Conference on Learning Representations, Vancouver, Canada. Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness problem. In Proceedings of the Third International Conference on Learning Representations, San Diego, California. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 462–471, Gothenburg, Sweden. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, Montreal, Quebec, Canada. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, pages 820–828, Barcelona, Spain. Shexia He, Zuchao Li, Hai Zhao, and Hongxiao Bai. 2018. Syntax for semantic role labeling, to be, or not to be. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2061–2071, Melbourne, Australia. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1367–1377, San Diego California, USA. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the Third International Conference on Learning Representations, San Diego, California. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. CoRR, abs/1901.07291. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In Proceedings of the Sixth International Conference on Learning Representations, Vancouver, Canada. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 
2018b. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049, Brussels, Belgium. Zuchao Li, Jiaxun Cai, Shexia He, and Hai Zhao. 2018. Seq2seq dependency parsing. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3203–3214, Santa Fe, New Mexico, USA. Zuchao Li, Shexia He, Hai Zhao, Yiqing Zhang, Zhuosheng Zhang, Xi Zhou, and Xiang Zhou. 2019. Dependency or span, end-to-end uniform semantic role labeling. CoRR, abs/1901.05280. Ang Lu, Weiran Wang, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Deep multilingual correlation for improved word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 250–256, Denver, Colorado. Benjamin Marie and Atsushi Fujita. 2018. Unsupervised neural machine translation initialized by unsupervised statistical machine translation. CoRR, abs/1810.12703. Benjamin Marie, Rui Wang, Atsushi Fujita, Masao Utiyama, and Eiichiro Sumita. 2018. Nict’s neural and statistical machine translation systems for the wmt18 news translation task. In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, pages 453–459, Belgium, Brussels. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168. Shuo Ren, Zhirui Zhang, Shujie Liu, Ming Zhou, and Shuai Ma. 2019. Unsupervised neural machine translation with SMT as posterior regularization. CoRR, abs/1901.04112. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany. Samuel L. Smith, David H.P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of the Fifth International Conference on Learning Representations, Toulon, France. 1245 Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778–788, Melbourne, Australia. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11:3371–3408. Rui Wang, Masao Utiyama, Andrew Finch, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2018. Sentence selection and weighting for neural machine translation domain adaptation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(10):1727–1741. Rui Wang, Hai Zhao, Bao-Liang Lu, Masao Utiyama, and Eiichiro Sumita. 2014. Neural network based bilingual language model growing for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 189–195, Doha, Qatar. 
Rui Wang, Hai Zhao, Sabine Ploux, Bao-Liang Lu, and Masao Utiyama. 2016. A bilingual graph-based semantic model for statistical machine translation. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 2950–2956, New York, USA. Rui Wang, Hai Zhao, Sabine Ploux, Bao-Liang Lu, Masao Utiyama, and Eiichiro Sumita. 2018. Graphbased bilingual word embedding for statistical machine translation. ACM Trans. Asian & LowResource Lang. Inf. Process., 17(4):31:1–31:23. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006–1011, Denver, Colorado. Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Unsupervised neural machine translation with weight sharing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 46–55, Melbourne, Australia. Huan Zhang and Hai Zhao. 2019. Minimum divergence vs. maximum margin: An empirical comparison on seq2seq models. In Proceedings of the Seventh International Conference on Learning Representations, New Orleans, USA. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vancouver, Canada. Zhisong Zhang, Hai Zhao, and Lianhui Qin. 2016. Probabilistic graph-based dependency parsing with convolutional neural network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1382–1392, Berlin, Germany.
2019
119
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 117–128 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 117 The (Non-)Utility of Structural Features in BiLSTM-based Dependency Parsers Agnieszka Falenska and Jonas Kuhn Institut f¨ur Maschinelle Sprachverarbeitung University of Stuttgart [email protected] Abstract Classical non-neural dependency parsers put considerable effort on the design of feature functions. Especially, they benefit from information coming from structural features, such as features drawn from neighboring tokens in the dependency tree. In contrast, their BiLSTM-based successors achieve state-ofthe-art performance without explicit information about the structural context. In this paper we aim to answer the question: How much structural context are the BiLSTM representations able to capture implicitly? We show that features drawn from partial subtrees become redundant when the BiLSTMs are used. We provide a deep insight into information flow in transition- and graph-based neural architectures to demonstrate where the implicit information comes from when the parsers make their decisions. Finally, with model ablations we demonstrate that the structural context is not only present in the models, but it significantly influences their performance. 1 Introduction When designing a conventional non-neural parser substantial effort is required to design a powerful feature extraction function. Such a function (McDonald et al., 2005; Zhang and Nivre, 2011, among others) is constructed so that it captures as much structural context as possible. The context allows the parser to make well-informed decisions.1 It is encoded in features built from partial subtrees and explicitly used by the models. Recently, Kiperwasser and Goldberg (2016, K&G) showed that the conventional feature extraction functions can be replaced by modeling the left- and right-context of each word with BiLSTMs (Hochreiter and Schmidhuber, 1997; 1See Figure 1 for the concept of structural context, details of the architectures will be described in Section 2. Graves and Schmidhuber, 2005). Although the proposed models do not use any conventional structural features they achieve state-of-the-art performance. The authors suggested that it is because the BiLSTM encoding is able to estimate the missing information from the given features and did not explore this issue further. Since the introduction of the K&G architecture BiLSTM-based parsers have become standard in the field.2 Yet, it is an open question how much conventional structural context the BiLSTMs representations actually are able to capture implicitly. Small architectures that ignore the structural context are attractive since they come with lower time complexity. But to build such architectures it is important to investigate to what extent the explicit structural information is redundant. For example, K&G also proposed an extended feature set derived from structural context, which has subsequently been re-implemented and used by others without questioning its utility. Inspired by recent work (Gaddy et al., 2018) on constituency parsing we aim at understanding what type of information is captured by the internal representations of BiLSTM-based dependency parsers and how it translates into their impressive accuracy. 
As our starting point we take the K&G architecture and extend it with a secondorder decoder.3 We perform systematic analyses on nine languages using two different architectures (transition-based and graph-based) across two dimensions: with and without BiLSTM representations, and with and without features drawn from structural context. 2See results from the recent CoNLL 2018 shared task on dependency parsing (Zeman et al., 2018) for a comparison of various high-performing dependency parsers. 3To the best of our knowledge, this is the first BiLSTMbased second-order dependency parser. G´omez-Rodr´ıguez et al. (2018) incorporate BiLSTM-based representations into the third-order 1-Endpoint-Crossing parser of Pitler (2014). 118 s1 s0L s0 s0R b0 x1 x2 ... xi ... xj xn x1 x2 ... xi ... xj xn scores: LAlbl RAlbl SHIFT [3] MLP [2] BiLSTM [1] word repr. structural context (a) Transition-based parser; scoring transitions for the configuration ⟨Σ, B, A⟩= ⟨x1 . . . xi, xn, {xi →x2, xi →xj, . . .}⟩ head (h) sibling (s) dependent (d) x1 x2 ... xi ... xj xn x1 x2 ... xi ... xj xn arc score (b) Graph-based parser; scoring an arc x1 →xj Figure 1: Schematic illustration of the K&G architecture of BiLSTM-based neural dependency parsers. Red arrows mark the basic feature sets and blue show how to extend them with features drawn from structural context. We demonstrate that structural features are useful for neural dependency parsers but they become redundant when BiLSTMs are used (Section 4). It is because the BiLSTM representations trained together with dependency parsers capture a significant amount of complex syntactic relations (Section 5.1). We then carry out an extensive investigation of information flow in the parsing architectures and find that the implicit structural context is not only present in the BiLSTM-based parsing models, but also more diverse than when encoded in explicit structural features (Section 5.2). Finally, we present results on ablated models to demonstrate the influence of structural information implicitly encoded in BiLSTM representations on the final parsing accuracy (Section 5.3). 2 Parsing Model Architecture Our graph- and transition-based parsers are based on the K&G architecture (see Figure 1). The architecture has subsequently been extended by, e.g., character-based embeddings (de Lhoneux et al., 2017) or attention (Dozat and Manning, 2016). To keep the experimental setup clean and simple while focusing on the information flow in the architecture, we abstain from these extensions. We use the basic K&G architecture as our starting point with a few minor changes outlined below. For further details we refer the reader to Kiperwasser and Goldberg (2016). 2.1 Word Representations In both transition- and graph-based architectures input tokens are represented in the same way (see level [1] in Figure 1). For a given sentence with words [w1, . . . wn] and part-of-speech (POS) tags [t1, . . . , tn] each word representation xi is built from concatenating the embeddings of the word and its POS tag: xi = e(wi) ◦e(ti) The embeddings are initialized randomly at training time and trained together with the model. The representations xi encode words in isolation and do not contain information about their context. For that reason they are passed to the BiLSTM feature extractors (level [2] in Figure 1) and represented by a BiLSTM representation xi: xi = BiLSTM(x1:n, i) 2.2 Transition-Based Parser Transition-based parsers gradually build a tree by applying a sequence of transitions. 
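Before looking at the transition system in detail, the shared input layer of Section 2.1 can be made concrete with a short sketch. The snippet below is a hypothetical PyTorch re-implementation (the parsers in this paper are written in DyNet); the layer sizes match the hyperparameters listed in Appendix A (Table 2), while the class and variable names are purely illustrative.

```python
import torch
import torch.nn as nn

class TokenEncoder(nn.Module):
    """Sketch of the shared input layer: x_i = e(w_i) concatenated with e(t_i), then a BiLSTM."""

    def __init__(self, n_words, n_tags, word_dim=100, tag_dim=20,
                 lstm_dim=125, lstm_layers=2):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)   # e(w_i)
        self.tag_emb = nn.Embedding(n_tags, tag_dim)      # e(t_i)
        self.bilstm = nn.LSTM(word_dim + tag_dim, lstm_dim,
                              num_layers=lstm_layers,
                              bidirectional=True, batch_first=True)

    def forward(self, word_ids, tag_ids):
        # context-free token vectors x_i (concatenation of word and POS embeddings)
        x = torch.cat([self.word_emb(word_ids), self.tag_emb(tag_ids)], dim=-1)
        # contextualised vectors BiLSTM(x_{1:n}, i), shape (batch, n, 2 * lstm_dim)
        contextual, _ = self.bilstm(x)
        return contextual
```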
During training they learn a scoring function for transitions. While decoding they search for the best action given the current state and the parsing history. Figure 1a illustrates the architecture of the transition-based K&G parser. For every configuration c consisting of a stack, buffer, and a set of arcs introduced so far, the parser selects a few core items from the stack and buffer (red arrows in the figure) as features. Next, it concatenates their BiLSTM vectors and passes them to a multi-layer perceptron (MLP) which assigns scores to all possible transitions. The highest scoring transition is used to proceed to the next configuration. Our implementation (denoted TBPARS) uses the arc-standard decoding algorithm (Nivre, 2004) extended with a SWAP transition (ASWAP, Nivre (2009)) to handle non-projective trees. The system applies arc transitions between the two topmost items of the stack (denoted s0 and s1). We use the lazy SWAP oracle by Nivre et al. (2009) for training. Labels are predicted together with the transitions. We experiment with two models with different feature sets: 119 TBMIN: is the simple architecture which does not use structural features. Since Shi et al. (2017) showed that the feature set { s0, s1, b0} is minimal for the arc-standard system (i.e., it suffers almost no loss in performance in comparison to larger feature sets but significantly out-performs a feature set built from only two vectors) we apply the same feature set to ASWAP. Later we analyze if the set could be further reduced. TBEXT: is the extended architecture. We use the original extended feature set from K&G: { s0, s1, s2, b0, s0L, s0R, s1L, s1R, s2L, s2R, b0L}, where .L and .R denote left- and right-most child. 2.3 Graph-Based Parser The K&G graph-based parser follows the structured prediction paradigm: while training it learns a scoring function which scores the correct tree higher than all the other possible ones. While decoding it searches for the highest scoring tree for a given sentence. The parser employs an arcfactored approach (McDonald et al., 2005), i.e., it decomposes the score of a tree to the sum of the scores of its arcs. Figure 1b shows the K&G graph-based architecture. At parsing time, every pair of words ⟨xi, xj⟩yields a BiLSTM representation { xi, xj} (red arrows in the figure) which is passed to MLP to compute the score for an arc xi →xj. To find the highest scoring tree we apply Eisner (1996)’s algorithm. We denote this architecture GBMIN. We note in passing that, although this decoding algorithm is restricted to projective trees, it has the advantage that it can be extended to incorporate non-local features while still maintaining exact search in polynomial time.4 The above-mentioned simple architecture uses a feature set of two vectors { h , d }. We extend it and add information about structural context. Specifically, we incorporate information about siblings s (blue arrows in the figure). The model follows the second-order model from McDonald and Pereira (2006) and decomposes the score of the tree into the sum of adjacent edge pair scores. We use the implementation of the secondorder decoder from Zhang and Zhao (2015). We denote this architecture GBSIBL. 4Replacing Eisner (1996)’s algorithm with the Chu-LiuEdmonds’s decoder (Chu and Liu, 1965; Edmonds, 1967) which can predict non-projective arcs causes significant improvements only for the Ancient Greek treebank (1.02 LAS on test set). 3 Experimental Setup Data sets and preprocessing. 
We perform experiments on a selection of nine treebanks from Universal Dependencies (Nivre et al., 2016) (v2.0): Ancient Greek PROIEL (grc), Arabic (ar), Chinese (zh), English (en), Finnish (fi), Hebrew (he), Korean (ko), Russian (ru) and Swedish (sv). This selection was proposed by Smith et al. (2018) as a sample of languages varying in language family, morphological complexity, and frequencies of non-projectivity (we refer to Smith et al. (2018) for treebank statistics). To these 9, we add the English Penn Treebank (en-ptb) converted to Stanford Dependencies.5 We use sections 2-21 for training, 24 as development set and 23 as test set. We use automatically predicted universal POS tags in all the experiments. The tags are assigned using a CRF tagger (Mueller et al., 2013). We annotate the training sets via 5-fold jackknifing. Evaluation. We evaluate the experiments using Labeled Attachment Score (LAS).6 We train models for 30 epochs and select the best model based on development LAS. We follow recommendations from Reimers and Gurevych (2018) and report averages and standard deviations from six models trained with different random seeds. We test for significance using the Wilcoxon rank-sum test with p-value < 0.05. Analysis is carried out on the development sets in order not to compromise the test sets. We present the results on the concatenation of all the development sets (one model per language). While the absolute numbers vary across languages, the general trends are consistent with the concatenation. Implementation details. All the described parsers were implemented with the DyNet library (Neubig et al., 2017).7 We use the same hyperparameters as Kiperwasser and Goldberg (2016) and summarize them in Table 2 in Appendix A. 5We use version 3.4.1 of the Stanford Parser from http://nlp.stanford.edu/software/ lex-parser.shtml 6The ratio of tokens with a correct head and label to the total number of tokens in the test data. 7The code can be found on the first author’s website. 120 avg. en-ptb ar en fi grc he ko ru sv zh TBMIN 76.43 90.25 76.22 81.85† 72.51† 71.92† 79.41† 64.39 74.35† 80.11† 73.28† TBEXT 75.56 90.25 75.77 80.50 71.47 70.32 78.62 63.88 73.82 78.80 72.17 GBMIN 77.74 91.40 77.25 82.53 74.37 73.48 80.83 65.47 76.43 81.22 74.47 GBSIBL 77.89 91.59 77.21 82.65 74.44 73.20 81.03 65.61 76.79† 81.42 74.95† Table 1: Average (from six runs) parsing results (LAS) on test sets. † marks statistical significance (p-value < 0.05). Corresponding standard deviations are provided in Table 3 in Appendix A. 4 Structural Features and BiLSTMs 4.1 Simple vs. Extended Architectures We start by evaluating the performance of our four models. The purpose is to verify that the simple architectures will compensate for the lack of additional structural features and achieve comparable accuracy to the extended ones. Table 1 shows the accuracy of all the parsers. Comparing the simple and extended architectures we see that dropping the structural features does not hurt the performance, neither for transitionbased nor graph-based parsers. Figure 2 displays the accuracy relative to dependency length in terms of recall.8 It shows that the differences between models are not restricted to arcs of particular lengths. In the case of graph-based models (GBMIN vs. GBSIBL) adding the second-order features to a BiLSTM-based parser improves the average performance slightly. However, the difference between those two models is significant only for two out of ten treebanks. For the transition-based parser (TBMIN vs. 
TBEXT) a different effect can be noticed – additional features cause a significant loss in accuracy for seven out of ten treebanks. One possible explanation might be that TBEXT suffers more from error propagation than TBMIN. The parser is greedy and after making the first mistake it starts drawing features from configurations which were not observed during training. Since the extended architecture uses more features than the simple one the impact of the error propagation might be stronger. This effect can be noticed in Figure 2. The curves for TBMIN and TBEXT are almost parallel for the short arcs but the performance of TBEXT deteriorates for the longer ones, which are more prone to error propagation (McDonald and Nivre, 2007). 8Dependency recall is defined as the percentage of correct predictions among gold standard arcs of length l (McDonald and Nivre, 2007). 1 2 3 4 5 6 7 8 9 10 11 12 13 1415+ Dependency length 0 20k 40k 60k 80k 100k Bin size 0.5 0.6 0.7 0.8 0.9 Recall GbMin GbSibl TbMin TbExt Figure 2: Dependency recall relative to arc length on development sets. The corresponding plot for precision shows similar trends (see Figure 7 in Appendix A). 4.2 Influence of BiLSTMs We now investigate whether BiLSTMs are the reason for models being able to compensate for lack of features drawn from partial subtrees. Transition-based parser. We train TBPARS in two settings: with and without BiLSTMs (when no BiLSTMs are used we pass vectors xi directly to the MLP layer following Chen and Manning (2014)) and with different feature sets. We start with a feature set {s0} and consecutively add more until we reach the full feature model of TBEXT. Figure 3a displays the accuracy of all the trained models. First of all, we notice that the models without BiLSTMs (light bars) benefit from structural features. The biggest gains in the average performance are visible after adding vectors s0L (5.15 LAS) and s1R (1.12 LAS) . After adding s1R the average improvements become modest. Adding the BiLSTM representations changes the picture (dark bars). First of all, as in the case of arc-standard system (Shi et al., 2017), the feature set { s0, s1, b0} is minimal for ASWAP: none of the other structural features are able to improve the performance of the parser but dropping b0 causes a big drop of almost 6 LAS on average. Secondly, the parsers which use BiLSTMs always have a big 121 s0 s1 b0 s2 s0L s0R s1L s1R s2L s2R b0L 0 20 40 60 80 LAS TbPars +BiLSTM (a) Transition-based parser h,d(,s) dist h±1, d±1 h±2, d±2 0 20 40 60 80 LAS GbMin +BiLSTM GbSibl +BiLSTM (b) Graph-based parser Figure 3: Parsing accuracy (average LAS over ten treebanks) with incremental extensions to the feature set. advantage over the parsers which do not, regardless of the feature model used. Graph-based parser. We train two models: GBMIN and GBSIBL with and without BiLSTMs. To ensure a fairer comparison with the models without BiLSTMs we expand the basic feature sets ({ h , d } and { h , d , s }) with additional surface features known from classic graph-based parsers, such as distance between head and dependent (dist), words at distance of 1 from heads and dependents (h±1, d±1) and at distance ±2. We follow Wang and Chang (2016) and encode distance as randomly initialized embeddings. Figure 3b displays the accuracy of all the trained models with incremental extensions to their feature sets. First of all, we see that surface features (dist, h±1, d±1, h±2, d±2) are beneficial for the models without BiLSTM representations (light bars). 
The improvements are visible for both parsers, with the smallest gains after adding h±2, d±2 vectors: on average 0.35 LAS for GBSIBL and 0.83 LAS for GBMIN. As expected, adding BiLSTMs changes the picture. Since the representations capture surface context, they already contain a lot of information about words around heads and dependents and adding features h±1, d±1 and h±2, d±2 does not influence the performance. Interestingly, introducing dist is also redundant which suggests that either BiLSTMs are aware of the distance between tokens or they are not able to use this information in a meaningful way. Finally, even after adding all the surface features the models which do not employ BiLSTMs are considerably behind the ones which do. Comparing GBMIN (blue) with GBSIBL (red) we see that adding information about structural context through second-order features is beneficial when the BiLSTM are not used (light bars): the second-order GBSIBL has an advantage over GBMIN of 0.81 LAS even when both of the models use all the additional surface information (last group of bars on the plot). But this advantage drops down to insignificant 0.07 LAS when the BiLSTMs are incorporated. We conclude that, for both transition- and graph-based parsers, BiLSTMs not only compensate for absence of structural features but they also encode more information than provided by the manually designed feature sets. 5 Implicit Structural Context Now that we have established that structural features are indeed redundant for models which employ BiLSTMs we examine the ways in which the simple parsing models (TBMIN and GBMIN) implicitly encode information about partial subtrees. 5.1 Structure and BiLSTM Representations We start by looking at the BiLSTM representations. We know that the representations are capable of capturing syntactic relations when they are trained on a syntactically related task, e.g, number prediction task (Linzen et al., 2016). We evaluate how complicated those relations can be when the representations are trained together with a dependency parser. To do so, we follow Gaddy et al. (2018) and use derivatives to estimate how sensitive a particular part of the architecture is with respect to changes in input. Specifically, for every vector x we measure how it is influenced by every word represen122 0 5 10 15 20 Distance 0 5 10 15 20 Average impact Other Head Grand Child Sibl (a) Transition-based parser (TBMIN) 0 5 10 15 20 Distance 0 5 10 15 20 Average impact Other Head Grand Child Sibl (b) Graph-based parser (GBMIN) Figure 4: The average impact of tokens on BiLSTM vectors trained with dependency parser with respect to the surface distance and the structural (gold-standard) relation between them. tation xi from the sentence. If the derivative of x with respect to xi is high then the word xi has a high influence on the vector. We compute the l2norm of the gradient of x with respect to xi and normalize it by the sum of norms of all the words from the sentence calling this measure impact: impact( x , i) = 100 × || ∂ x ∂xi || P j || ∂ x ∂xj || For every sentence from the development set and every vector xi we calculate the impact of every representation xj from the sentence on the vector xi. We bucket those impact values according to the distance between the representation and the word. We then use the gold-standard trees to divide every bucket into five groups: correct heads of xi, children (i.e., dependents) of xi, grandparents (i.e., heads of heads), siblings, and other. 
Figure 4 shows the average impact of tokens at particular positions. Similarly as shown by Gaddy et al. (2018) even words 15 and more positions away have a non-zero effect on the BiLSTM vector. Interestingly, the impact of words which we know to be structurally close to xi is higher. For example, for the transition-based parser (Figure 4a) at positions ±5 an average impact is lower than 2.5%, children and siblings of xi have a slightly higher impact, and the heads and grandparents around 5%. For the graph-based parser (Figure 4b) the picture is similar with two noticeable differences. The impact of heads is much stronger for words 10 and more positions apart. But it is smaller than in the case of transition-based parser when the heads are next to xi. We conclude that the BiLSTMs are indeed influenced by the distance, but when trained with a dependency parser they also capture a significant amount of non-trivial syntactic relations. 5.2 Structure and Information Flow Now that we know that the representations encode structural information we ask how this information influences the decisions of the parser. First, we investigate how much structural information flows into the final layer of the network. When we look back at the architecture in Figure 1 we see that when the final MLP scores possible transitions or arcs it uses only feature vectors { s0, s1, b0} or { h , d }. But thanks to the BiLSTMs the vectors encode information about other words from the sentence. We examine from which words the signal is the strongest when the parser makes the final decision. We extend the definition of impact to capture how a specific word representation xi influences the final MLP score sc (we calculate the derivative of sc with respect to xi). We parse every development sentence. For every predicted transition/arc we calculate how much its score sc was affected by every word from the sentence. We group impacts of words depending on their positions. Transition-based parser. For the transitionbased parser we group tokens according to their positions in the configuration. For example, for the decision in Figure 1a impact(sc, 1) would be grouped as s1 and impact(sc, j) as s0R. In Figure 5a we plot the 15 positions with the highest impact and the number of configurations they appear in (gray bars). As expected, s0, s1, and 123 s0 s1 b0 s1R s0L b1 s0R s0L s0R s2R s1R s1L s1L b2 s2 0 100k 200k 300k 400k 0 5 10 15 20 25 30 Average impact (a) Transition-based parser (TBMIN); positions depend on the configuration; .L marks left children that are not the leftmost, .R marks right children that are not the rightmost. h d c d±1 s h±1 d±2 g h±2 d±3 0 50k 100k 150k 200k Bin size 0 5 10 15 20 25 30 (b) Graph-based parser (GBMIN); positions are: heads (h), dependents (d), children of d (c), siblings (s), grandparents (g), h,d±i tokens at distance ±i from h or d which are none of h, d, c, s, or g. Figure 5: Positions with the highest impact on the MLP scores (blue crosses) and their frequency (gray bars). b0 have the highest influence on the decision of the parser. The next two positions are s1R and s0L. Interestingly, those are the same positions which used as features caused the biggest gains in performance for the models which did not use BiLSTMs (see Figure 3a). They are much less frequent than b1 but when they are present the model is strongly influenced by them. 
After b1 we can notice positions which are not part of the manually designed extended feature set of TBEXT, such as s0L (left children of s0 that are not the leftmost). Graph-based parser. For the graph-based parser we group tokens according to their position in the full predicted tree. We then bucket the impacts into: heads (h), dependents (d), children (i.e., dependents of dependents) (c), siblings (s), and grandparents (i.e., heads of heads) (g). Words which do not fall into any of those categories are grouped according to their surface distance from heads and dependents. For example, h±2 are tokens two positions away from the head which do not act as dependent, child, sibling, or grandparent. Figure 5b presents 10 positions with the highest impact and the number of arcs for which they are present (gray bars). As expected, heads and dependents have the highest impact on the scores of arcs, much higher than any of the other tokens. Interestingly, among the next three bins with the highest impact are children and siblings. Children are less frequent than structurally unrelated tokens at distance 1 (h±1, d±1), and much less frequent than h±2 or d±2 but they influence the final scores more. The interesting case is siblings – they not only have a strong average impact but they are also very frequent, suggesting that they are very important for the parsing accuracy. The results above show that the implicit structural context is not only present in the models, but also more diverse than when incorporated through conventional explicit structural features. 5.3 Structure and Performance Finally, we investigate if the implicit structural context is important for the performance of the parsers. To do so, we take tokens at structural positions with the highest impact and train new ablated models in which the information about those tokens is dropped from the BiLSTM layer. For example, while training an ablated model without s0L, for every configuration we re-calculate all the BiLSTM vectors as if s0L was not in the sentence. When there is more than one token at a specific position, for example s0L or c (i.e., children of the dependent), we pick a random one to drop. That way every ablated model looses information about at most one word. We note that several factors can be responsible for drops in performance of the ablated models. For example, the proposed augmentation distorts distance between tokens which might have an adverse impact on the trained representations. Therefore, in the following comparative analysis we interpret the obtained drops as an approximation of how much particular tokens influence the performance of the models. Transition-based parser. Figure 6a presents the drops in the parsing performance for the ab124 s0 s1 b0 s1R s0L b1 s0R s0L s0R s2R s1R s1L s1L b2 s2 60 65 70 75 80 LAS (a) Transition-based parser (TBMIN) h d c d±1 s h±1 d±2 g h±2 d±3 60 65 70 75 80 LAS (b) Graph-based parser (GBMIN) Figure 6: The performance drops when tokens at particular positions are removed from the BiLSTM encoding. The red line marks average LAS of uninterrupted model. Feature sets of both models are highlighted in green. lated models.9 First of all, removing the vectors { s0, s1, b0} (marked in green on the plot) only from the BiLSTM layer (although they are still used as features) causes visible drops in performance. One explanation might be that when the vector s0 is recalculated without knowledge of s1 the model loses information about the distance between them. 
Secondly, we can notice that other drops depend on both the impact and frequency of positions. The biggest declines are visible after removing s0L and s1R – precisely the positions which we found to have the highest impact on the parsing decisions. Interestingly, the positions which were not a part of the TBEXT feature set, such as s0L or s1R, although not frequent are important for the performance. Graph-based parser. Corresponding results for the graph-based parser are presented in Figure 6b (we use gold-standard trees as the source of information about structural relations between tokens). The biggest drop can be observed for ablated models without siblings. Clearly, information coming from those tokens implicitly into MLP is very important for the final parsing accuracy. The next two biggest drops are caused by lack of children and grandparents. As we showed in Figure 5b children, although less frequent, have a stronger impact on the decision of the parser. But dropping grandparents also significantly harms the models. We conclude that information about partial subtrees is not only present when the parser makes 9It is worth noting that not all of the models suffer from the ablation. For example, dropping vectors s2R causes almost no harm. This suggests that re-calculating the representations multiple times does not have a strong negative effect on training. final decisions but also strongly influences those decisions. Additionally, the deteriorated accuracy of the ablated models shows that the implicit structural context can not be easily compensated for. 6 Related Work Feature extraction. Kiperwasser and Goldberg (2016) and Cross and Huang (2016) first applied BiLSTMs to extract features for transition-based dependency parsers. The authors demonstrated that an architecture using only a few positional features (four for the arc-hybrid system and three for arc-standard) is sufficient to achieve state-ofthe-art performance. Shi et al. (2017) showed that this number can be further reduced to two features for arc-hybrid and arc-eager systems. Decreasing the size of the feature set not only allows for construction of lighter and faster neural networks (Wang and Chang, 2016; Vilares and G´omez-Rodr´ıguez, 2018) but also enables the use of exact search algorithms for several projective (Shi et al., 2017) and non-projective (G´omezRodr´ıguez et al., 2018) transition systems. A similar trend can be observed for graph-based dependency parsers. State-of-the-art models (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2016) typically use only two features of heads and dependents, possibly also incorporating their distance (Wang and Chang, 2016). Moreover, Wang and Chang (2016) show that arc-factored BiLSTM-based parsers can compete with conventional higher-order models in terms of accuracy. None of the above mentioned efforts address the question how dependency parsers are able to compensate for the lack of structural features. The very recent work by de Lhoneux et al. (2019) looked into this issue from a different perspec125 tive than ours – composition. They showed that composing the structural context with recursive networks as in Dyer et al. (2015) is redundant for the K&G transition-based architecture. The authors analyze components of the BiLSTMs to show which of them (forward v. backward LSTM) is responsible for capturing subtree information. RNNs and syntax. 
Recurrent neural networks, which BiLSTMs are a variant of, have been repeatedly analyzed to understand whether they can learn syntactic relations. Such analyses differ in terms of: (1) methodology they employ to probe what type of knowledge the representations learned and (2) tasks on which the representations are trained on. Shi et al. (2016) demonstrated that sequence-to-sequence machinetranslation systems capture source-language syntactic relations. Linzen et al. (2016) showed that when trained on the task of number agreement prediction the representations capture a nontrivial amount of grammatical structure (although recursive neural networks are better at this task than sequential LSTMs (Kuncoro et al., 2018)). Blevins et al. (2018) found that RNN representations trained on a variety of NLP tasks (including dependency parsing) are able to induce syntactic features (e.g., constituency labels of parent or grandparent) even without explicit supervision. Finally, Conneau et al. (2018) designed a set of tasks probing linguistic knowledge of sentence embedding methods. Our work contributes to this line of research in two ways: (1) from the angle of methodology, we show how to employ derivatives to pinpoint what syntactic relations the representations learn; (2) from the perspective of tasks, we demonstrate how BiLSTM-based dependency parsers take advantage of structural information encoded in the representations. In the case of constituency parsing Gaddy et al. (2018) offer such an analysis. The authors show that their BiLSTM-based models implicitly learn the same information which was conventionally provided to non-neural parsers, such as grammars and lexicons. 7 Discussion and Conclusion We examined how the application of BiLSTMs influences the modern transition- and graph-based parsing architectures. The BiLSTM-based parsers can compensate for the lack of traditional structural features. Specifically, the features drawn from partial subtrees become redundant because the parsing models encode them implicitly. The main advantage of BiLSTMs comes with their ability to capture not only surface but also syntactic relations. When the representations are trained together with a parser they encode structurally-advanced relations such as heads, children, or even siblings and grandparents. This structural information is then passed directly (through feature vectors) and indirectly (through BiLSTMs encoding) to MLP and is used for scoring transitions and arcs. Finally, the implicit structural information is important for the final parsing decisions: dropping it in ablated models causes their performance to deteriorate. The introduction of BiLSTMs into dependency parsers has an additional interesting consequence. The classical transition- and graph-based dependency parsers have their strengths and limitations due to the trade-off between the richness of feature functions and the inference algorithm (McDonald and Nivre, 2007). Our transition- and graph-based architectures use the same word representations. We showed that those representations trained together with the parsers capture syntactic relations in a similar way. Moreover, the transition-based parser does not incorporate structural features through the feature set. And the graph-based parser makes use of far away surface tokens but also structurally related words. Evidently, the employment of BiLSTM feature extractors blurs the difference between the two architectures. 
The one clear advantage of the graphbased parser is that it performs global inference (but exact search algorithms are already being applied to projective (Shi et al., 2017) and nonprojective (G´omez-Rodr´ıguez et al., 2018) transition systems). Therefore, an interesting question is if integrating those two architectures can still be beneficial for the parsing accuracy as in Nivre and McDonald (2008). We leave this question for future work. Acknowledgments This work was supported by the Deutsche Forschungsgemeinschaft (DFG) via the SFB 732, project D8. We would like to thank the anonymous reviewers for their comments. We also thank our colleagues Anders Bj¨orkelund, ¨Ozlem C¸ etino˘glu, and Xiang Yu for many conversations and comments on this work. 126 References Terra Blevins, Omer Levy, and Luke Zettlemoyer. 2018. Deep RNNs Encode Soft Hierarchical Syntax. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 14–19, Melbourne, Australia. Association for Computational Linguistics. Danqi Chen and Christopher Manning. 2014. A Fast and Accurate Dependency Parser using Neural Networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740–750. Association for Computational Linguistics. Yoeng-Jin Chu and Tseng-Hong Liu. 1965. On shortest arborescence of a directed graph. Scientia Sinica, 14(10):1396–1400. Alexis Conneau, Germ´an Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126–2136, Melbourne, Australia. Association for Computational Linguistics. James Cross and Liang Huang. 2016. Incremental Parsing with Minimal Features Using Bi-Directional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 32–37. Association for Computational Linguistics. Timothy Dozat and Christopher D. Manning. 2016. Deep Biaffine Attention for Neural Dependency Parsing. CoRR, abs/1611.01734. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. TransitionBased Dependency Parsing with Stack Long ShortTerm Memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 334–343. Association for Computational Linguistics. Jack Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards B, 71(4):233–240. Jason M. Eisner. 1996. Three New Probabilistic Models for Dependency Parsing: An Exploration. In COLING 1996 Volume 1: The 16th International Conference on Computational Linguistics. David Gaddy, Mitchell Stern, and Dan Klein. 2018. What’s Going On in Neural Constituency Parsers? An Analysis. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 999–1010. Association for Computational Linguistics. Carlos G´omez-Rodr´ıguez, Tianze Shi, and Lillian Lee. 2018. Global Transition-based Non-projective Dependency Parsing. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2664– 2675, Melbourne, Australia. Association for Computational Linguistics. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise Phoneme Classification with Bidirectional LSTM and Other Neural Network Architectures. Neural Networks, 18(5):602–610. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Representations. Transactions of the Association for Computational Linguistics, 4:313–327. Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426–1436. Association for Computational Linguistics. Miryam de Lhoneux, Miguel Ballesteros, and Joakim Nivre. 2019. Recursive subtree composition in lstm-based dependency parsing. arXiv preprint arXiv:1902.09781. Miryam de Lhoneux, Yan Shao, Ali Basirat, Eliyahu Kiperwasser, Sara Stymne, Yoav Goldberg, and Joakim Nivre. 2017. From raw text to universal dependencies - look, no tags! In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 207–217. Association for Computational Linguistics. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of lstms to learn syntaxsensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521–535. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online Large-Margin Training of Dependency Parsers. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 91–98. Association for Computational Linguistics. Ryan McDonald and Joakim Nivre. 2007. Characterizing the Errors of Data-Driven Dependency Parsing Models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). 127 Ryan McDonald and Fernando Pereira. 2006. Online Learning of Approximate Dependency Parsing Algorithms. In 11th Conference of the European Chapter of the Association for Computational Linguistics. Thomas Mueller, Helmut Schmid, and Hinrich Sch¨utze. 2013. Efficient higher-order crfs for morphological tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 322–332. Association for Computational Linguistics. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. DyNet: The Dynamic Neural Network Toolkit. CoRR, abs/1701.03980. Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together, pages 50–57, Barcelona, Spain. Joakim Nivre. 2009. Non-Projective Dependency Parsing in Expected Linear Time. 
In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 351–359. Association for Computational Linguistics. Joakim Nivre, Marco Kuhlmann, and Johan Hall. 2009. An Improved Oracle for Dependency Parsing with Online Reordering. In Proceedings of the 11th International Conference on Parsing Technologies (IWPT’09), pages 73–76. Association for Computational Linguistics. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A Multilingual Treebank Collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France. European Language Resources Association (ELRA). Joakim Nivre and Ryan McDonald. 2008. Integrating Graph-Based and Transition-Based Dependency Parsers. In Proceedings of ACL-08: HLT, pages 950–958, Columbus, Ohio. Association for Computational Linguistics. Emily Pitler. 2014. A Crossing-Sensitive Third-Order Factorization for Dependency Parsing. Transactions of the Association for Computational Linguistics, 2:41–54. Nils Reimers and Iryna Gurevych. 2018. Why comparing single performance scores does not allow to draw conclusions about machine learning approaches. arXiv preprint arXiv:1803.09578. Tianze Shi, Liang Huang, and Lillian Lee. 2017. Fast(er) Exact Decoding and Global Training for Transition-Based Dependency Parsing via a Minimal Feature Set. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 12–23. Association for Computational Linguistics. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does String-Based Neural MT Learn Source Syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1526–1534. Association for Computational Linguistics. Aaron Smith, Miryam de Lhoneux, Sara Stymne, and Joakim Nivre. 2018. An Investigation of the Interactions Between Pre-Trained Word Embeddings, Character Models and POS Tags in Dependency Parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2711–2720. Association for Computational Linguistics. David Vilares and Carlos G´omez-Rodr´ıguez. 2018. Transition-based Parsing with Lighter Feed-Forward Networks. In Proceedings of the Second Workshop on Universal Dependencies (UDW 2018), pages 162–172. Association for Computational Linguistics. Wenhui Wang and Baobao Chang. 2016. Graph-based Dependency Parsing with Bidirectional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2306–2315, Berlin, Germany. Association for Computational Linguistics. Daniel Zeman, Jan Hajiˇc, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1–21, Brussels, Belgium. Association for Computational Linguistics. Yue Zhang and Joakim Nivre. 2011. Transition-based Dependency Parsing with Rich Non-local Features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 188–193. 
Association for Computational Linguistics.

Zhisong Zhang and Hai Zhao. 2015. High-order Graph-based Neural Dependency Parsing. In Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation, pages 114–123.

A Appendix

Word embedding dimension        100
POS tag embedding dimension      20
Hidden units in MLP             100
BiLSTM layers                     2
BiLSTM dimensions               125
α for word dropout             0.25
Trainer                        Adam
Non-linear function            tanh

Table 2: Hyperparameters for the parsers.

         en-ptb     ar     en     fi    grc     he     ko     ru     sv     zh
TBMIN     0.237  0.323  0.207  0.163  0.382  0.391  0.740  0.282  0.295  0.398
TBEXT     0.211  0.191  0.176  0.323  0.472  0.454  0.456  0.408  0.257  0.267
GBMIN     0.146  0.179  0.212  0.157  0.340  0.269  0.300  0.228  0.379  0.408
GBSIBL    0.103  0.186  0.149  0.219  0.372  0.229  0.163  0.169  0.195  0.441

Table 3: Standard deviation for results in Table 1.

Figure 7: Dependency precision relative to arc length on development sets (x-axis: dependency length 1 to 15+; y-axis: precision; curves for GBMIN, GBSIBL, TBMIN, and TBEXT; gray bars show bin sizes).
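If one wants to carry these settings around in code, the values in Table 2 map naturally onto a small configuration object. The dataclass below is only an illustrative way of recording them; it is not part of any released implementation.

```python
from dataclasses import dataclass

@dataclass
class ParserHyperparams:
    """Hyperparameter values reported in Table 2."""
    word_emb_dim: int = 100
    pos_emb_dim: int = 20
    mlp_hidden_units: int = 100
    bilstm_layers: int = 2
    bilstm_dim: int = 125
    word_dropout_alpha: float = 0.25
    trainer: str = "adam"
    nonlinearity: str = "tanh"
```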
2019
12
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1246–1257 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1246 Effective Cross-lingual Transfer of Neural Machine Translation Models without Shared Vocabularies Yunsu Kim Yingbo Gao Hermann Ney Human Language Technology and Pattern Recognition Group RWTH Aachen University, Aachen, Germany {surname}@cs.rwth-aachen.de Abstract Transfer learning or multilingual model is essential for low-resource neural machine translation (NMT), but the applicability is limited to cognate languages by sharing their vocabularies. This paper shows effective techniques to transfer a pre-trained NMT model to a new, unrelated language without shared vocabularies. We relieve the vocabulary mismatch by using cross-lingual word embedding, train a more language-agnostic encoder by injecting artificial noises, and generate synthetic data easily from the pre-training data without back-translation. Our methods do not require restructuring the vocabulary or retraining the model. We improve plain NMT transfer by up to +5.1% BLEU in five low-resource translation tasks, outperforming multilingual joint training by a large margin. We also provide extensive ablation studies on pre-trained embedding, synthetic data, vocabulary size, and parameter freezing for a better understanding of NMT transfer. 1 Introduction Despite recent success of neural machine translation (NMT) (Bahdanau et al., 2015; Vaswani et al., 2017), its major improvements and optimizations cannot be easily applied to low-resource language pairs. Basic training procedure of NMT does not function well with only a handful of bilingual data (Koehn and Knowles, 2017), while collecting bilingual resource is arduous for many languages. Multilingual NMT solves the problem of lacking bilingual data by training a shared model along with other related languages (Firat et al., 2016; Johnson et al., 2017). For this to work in practice, however, we need a considerable effort to gather bilingual data over multiple languages and preprocess them jointly before training. This has two critical issues: 1) The languages for training should be linguistically related in order to build a shared vocabulary. 2) It is not feasible to add a new language to a trained model, since the training vocabulary must be redefined; one may need to re-train the model from scratch. In transfer learning (Zoph et al., 2016), adapting to a new language is conceptually simpler; given an NMT model pre-trained on a high-resource language pair (parent), we can just continue the training with bilingual data of another language pair (child). Here, the vocabulary mismatch between languages is still a problem, which seriously limits the performance especially for distant languages. This work proposes three novel ideas to make transfer learning for NMT widely applicable to various languages: • We alleviate the vocabulary mismatch between parent and child languages via crosslingual word embedding. • We train a more general encoder in the parent training by injecting artificial noises, making it easier for the child model to adapt to. • We generate synthetic data from parallel data of the parent language pair, improving the low-resource transfer where the conventional back-translation (Sennrich et al., 2016b) fails. These techniques give incremental improvements while we keep the transfer unsupervised, i.e. 
it does not require bilingual information between the transferor and the transferee. Note that adapting to a new language is done without shared vocabularies; we need neither to rearrange joint subword units nor to restart the parent model training. Experiments show that our methods offer significant gain in translation performance up to +5.1% BLEU over plain transfer learning, even when transferring to an unrelated, low-resource 1247 language. The results significantly outperform multilingual joint training (Johnson et al., 2017) in all of our experiments. We also provide in-depth analyses of the following aspects to understand the behavior of NMT transfer and maximize its performance: type of the pre-trained embedding, synthetic data generation methods, size of the transferred vocabulary, and parameter freezing. 2 Neural Machine Translation Before describing our transfer learning approach, this section covers basics of an NMT model. Explanations here are not based on a specific architecture but extendable to more complex model variants. For a source sentence fJ 1 = f1, ..., fj, ..., fJ (length J) and a corresponding target sentence eI 1 = e1, ..., ei, ..., eI (length I), NMT models the probability p(eI 1|fJ 1 ) with several components: source/target word embeddings, an encoder, a decoder, and an output layer. Source word embedding Esrc maps a discrete word f (as a one-hot vector) to a continuous representation (embedding) of that word Esrc(f). In practice, it is implemented by a lookup table and stored in a matrix in RD×V src, where D is the number of dimensions of the embedding. Target word embedding is analogous. An encoder takes a sequence of source word embeddings Esrc(fJ 1 ) and produces a sequence of hidden representations hJ 1 for the source sentence. The encoder can be modeled with recurrent (Sutskever et al., 2014), convolutional (Gehring et al., 2017), or self-attentive layers (Vaswani et al., 2017). The encoder is responsible for modeling syntactic and semantic relationships among the source words, including word order. A decoder generates target words for each target position i from its internal state si, which depends on hJ 1 , Etgt(ei−1), and si−1. It keeps track of the generated hypothesis up to position i-1 and relates the generation with source representations hJ 1 . For shared vocabularies between source and target languages, the target embedding weights can be tied with the source embedding weights, i.e. Esrc = Etgt. The model is trained on a parallel corpus by optimizing for the cross-entropy loss with the stochastic gradient descent algorithm. Translation is carried out with a beam search. For more details, we refer the reader to Bahdanau et al. (2015) and Vaswani et al. (2017). 3 Transfer Learning for NMT In general, transfer learning is reusing the knowledge from other domains/tasks when facing a new problem (Thrun and Pratt, 2012). It has been of continued interest in machine learning for the past decades, especially when there is not enough training data for the problem at hand. Much attention is given to transfer learning for neural networks, since hidden layers of the network can implicitly learn general representations of data; the knowledge can be readily transferred by copying the hidden layer weights to another network (Caruana, 1995; Bengio, 2012). For NMT, the easiest case of transfer learning is across text domains. 
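Concretely, this kind of transfer is just a selective copy of parameter tensors from a trained parent checkpoint into a freshly initialized child model before fine-tuning. The sketch below shows one hypothetical way to do it in PyTorch for two models with identical architectures; the checkpoint format, parameter names, and the optional skip list are assumptions for illustration, not a description of the authors' setup.

```python
import torch

def init_child_from_parent(child_model, parent_ckpt_path, skip_prefixes=()):
    """Copy parent parameters into the child model.

    Assumes the checkpoint stores a plain state_dict and that both models
    use the same parameter names. Parameters whose names start with one of
    `skip_prefixes` (e.g. a source embedding when the source language
    changes) keep their fresh initialization.
    """
    parent_state = torch.load(parent_ckpt_path, map_location="cpu")
    child_state = child_model.state_dict()
    for name, tensor in parent_state.items():
        if name not in child_state:
            continue
        if any(name.startswith(p) for p in skip_prefixes):
            continue  # leave these parameters at their fresh initialization
        child_state[name] = tensor  # reuse encoder/decoder/output weights
    child_model.load_state_dict(child_state)
    return child_model
```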
Having an NMT model trained on some data, we can continue the training from the same network parameters with data from another domain (Luong and Manning, 2015; Freitag and Al-Onaizan, 2016). Transfer from another natural language processing task is also straightforward; for example, we can initialize the parameters of NMT models with pre-trained language models of corresponding languages, since the encoder and decoder are essentially language models except a few additional translation-specific components (Ramachandran et al., 2017; Lample and Conneau, 2019). German Encoder English Decoder Basque Encoder English Decoder Pre-train Fine-tune Copy Parameters Copy Parameters Figure 1: Diagram of transfer learning for NMT from German→English to Basque→English. However, it is inherently difficult to transfer NMT models between languages, i.e. pre-train a model for a high-resource language pair and use the trained parameters for a low-resource language pair (Figure 1). Changing a language introduces a completely different data space that does not fit to the pre-trained model. In the following, we describe this discrepancy in detail and propose 1248 our solutions. We focus on switching source languages, while the target language is fixed. 3.1 Cross-lingual Word Embedding The biggest challenge of cross-lingual transfer is the vocabulary mismatch. A natural language vocabulary is discrete and unique for each language, while the mapping between two different vocabularies is non-deterministic and arbitrary. Therefore, when we merely replace a source language, the NMT encoder will see totally different input sequences; pre-trained encoder weights do not get along with the source embedding anymore. A popular solution to this is sharing the vocabulary among the languages of concern (Nguyen and Chiang, 2017; Kocmi and Bojar, 2018). This is often implemented with joint learning of subword units (Sennrich et al., 2016c). Despite its effectiveness, it has an intrinsic problem in practice: A parent model must be trained already with a shared vocabulary with child languages. Such a pre-trained parent model can be transferred only to those child languages using the same shared vocabulary. When we adapt to a new language whose words are not included in the shared vocabulary, we should learn a joint subword space again with the new language and retrain the parent model accordingly—very inefficient and not scalable. A shared vocabulary is also problematic in that it must be divided into language-specific portions. When many languages share it, an allocated portion for each will be smaller and accordingly less expressive. This is the reason why the vocabulary is usually shared only for linguistically related languages, effectively increasing the portion of common surface forms. In this work, we propose to keep the vocabularies separate, but share their embedding spaces instead of surface forms. This can be done independently from the parent model training and requires only monolingual data of the child language: 1. Learn monolingual embedding of the child language Emono child , using e.g. the skip-gram algorithm (Mikolov et al., 2013). 2. Extract source embedding Esrc parent from a pre-trained parent NMT model. 3. Learn a cross-lingual linear mapping W ∈ RD×D between 1 and 2 by minimizing the German Encoder Basque Encoder Copy Parameters German Embedding Basque Embedding W Figure 2: Cross-lingual mapping of a child (Basque) embedding to the parent (German) embedding. 
objective below:
\sum_{(f, f') \in S} \| W E^{mono}_{child}(f) - E^{src}_{parent}(f') \|^2    (1)
4. Replace the source embedding of the parent model parameters with the learned cross-lingual embedding:
E^{src}_{parent} \leftarrow W E^{mono}_{child}    (2)
5. Initialize the child model with the embedding from Step 4 and start the NMT training on the child language pair. The dictionary S in Step 3 can be obtained in an unsupervised way by adversarial training (Conneau et al., 2018) or by matching digits between the parent and child languages (Artetxe et al., 2017). The mapping W can also be iteratively refined with self-induced dictionaries of mutual parent–child nearest neighbors (Artetxe et al., 2017), which is still unsupervised. The cross-lingually mapped child embeddings fit better as input to the parent encoder, since they are adjusted to a space similar to that of the parent input embeddings (Figure 2). Note that in Step 4, the mapping W is not explicitly inserted as additional parameters in the network. It is multiplied by E^{mono}_{child} and the result is used as the initial source embedding weights. The initialized source embedding is also fine-tuned along with the other parameters in the last step. These steps involve neither rearranging a joint vocabulary nor retraining the parent model. Using our method, one can pre-train a single parent model once and transfer it to many different child languages efficiently. Our method is also effective for non-related languages that do not share surface forms, since we address the vocabulary mismatch at the embedding level. After each word is converted to its embedding, it is just a continuous-valued vector in a mathematical space; matching vocabularies is done by transforming the vectors, irrespective of language-specific alphabets.
Figure 3: Injecting noise into a German (parent) source sentence (e.g. "Ich arbeite hier ." → "Ich hier arbeite .").
3.2 Artificial Noises Another main difference between languages is the word order, namely the syntactic structure of sentences. Neural sequence-to-sequence models are highly dependent on the sequential ordering of the input, i.e. absolute/relative positions of input tokens. When we train an encoder for a language, it learns the language-specific word order conventions, e.g. the position of a verb in a clause, the structure of an adverb phrase, etc. If the input language is changed, the encoder has to adjust itself to unfamiliar word orders. The adaptation gets more difficult for non-related languages. To mitigate this syntactic difference in cross-lingual transfer for NMT, we suggest generalizing the parent encoder so that it is not over-optimized to the parent source language. We achieve this by modifying the source side of the parent training data, artificially changing its word order with random noises (Figure 3). The noise function includes (Hill et al., 2016; Kim et al., 2018):
• Inserting a word between original words uniformly with probability pins at each position, choosing the inserted word uniformly from the top Vins frequent words
• Deleting original words uniformly with probability pdel at each position
• Permuting original word positions uniformly within a limited distance dper
The noises are injected into every source sentence differently for each epoch. The encoder then sees not only the word order of the parent source language but also various other sentence structures. Since we set limits to the randomness of the noises, the encoder is still able to learn the general monotonicity of natural language sentences.
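For concreteness, a minimal sketch of such a noise function is given below, assuming whitespace-tokenized sentences; the parameter names mirror pins, Vins, pdel and dper above, but the exact sampling scheme (in particular the jitter-and-sort trick for the distance-limited permutation) is our own simplification rather than the authors' implementation.

```python
import random

def add_noise(tokens, top_words, p_ins=0.1, p_del=0.1, d_per=3):
    """Perturb the word order of one source sentence.

    tokens:    list of str (one tokenized parent source sentence)
    top_words: the V_ins most frequent words, used for insertions
    """
    # Deletion: drop each original token with probability p_del.
    kept = [t for t in tokens if random.random() >= p_del]

    # Insertion: with probability p_ins at each position, insert a
    # word drawn uniformly from the top-V_ins frequent words.
    noisy = []
    for t in kept:
        if random.random() < p_ins:
            noisy.append(random.choice(top_words))
        noisy.append(t)

    # Permutation: add uniform jitter in [0, d_per) to each index and
    # re-sort, so no token moves further than d_per positions away.
    keys = [i + random.uniform(0, d_per) for i in range(len(noisy))]
    return [t for _, t in sorted(zip(keys, noisy))]
```

Applying this function to the parent source side on the fly, with a fresh sample for every sentence at every epoch, matches the training-time behavior described above while leaving the target side untouched.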
This makes it easier for the parent encoder to adapt to a child source language, effectively transferring the pre-trained language-agnostic knowledge of input sequence modeling. 3.3 Synthetic Data from Parent Model Training Data Transfer learning for NMT is particularly necessary for low-resource language pairs where the bilingual data is scarce. The standard technique to address the scarcity is generating synthetic parallel data from target monolingual corpora via backtranslation (Sennrich et al., 2016b). However, this works only if the generated source sentences are of sufficiently acceptable quality. In low-resource translation tasks, it is hard to train a good target-tosource translation model, which is used to produce the source hypotheses. For these scenarios, we devise a simple trick to create additional parallel data for the child language pair without training a target-to-source translation model. The idea is to reuse the parallel data already used for training the parent model. In the source side, we retain only those tokens that exist in the child vocabulary and replace all other tokens with a predefined token, e.g. <unk> (Figure 4). The target side stays the same as we do not switch the languages. Basque Encoder <unk> , John ! Hallo , John ! (Basque) (German) Basque Vocabulary Figure 4: Synthetic Basque sentence generated from a German sentence. The source side of this synthetic data consists only of the overlapping vocabulary entries between the parent and child languages. By including this data in the child model training, we prevent an abrupt change of the input to the pretrained model while keeping the parent and child vocabularies separated. It also helps to avoid overfitting to a tiny parallel data of the child language pair. In addition, we can expect a synergy with crosslingual word embedding (Section 3.1), where the source embedding space of the child task is transformed into that of the parent task. In this crosslingual space, an overlapping token between parent and child vocabularies should have a very similar embedding to that in the original parent embedding space, to which the pre-trained encoder is already familiar. This helps to realize a smooth 1250 Source Data (→English) Family Language [#sents] Germanic German 10,111,758 Isolate Basque 5,605 Slavic Slovenian 17,103 Belarusian 4,509 Turkic Azerbaijani 5,946 Turkish 9,998 Table 1: Language families and parallel data statistics. transition from parent source input to child source input in the transfer process. 4 Main Results We verify the effect of our techniques in transfer learning setups with five different child source languages: Basque (eu), Slovenian (sl), Belarusian (be), Azerbaijani (az), and Turkish (tr). Target language is fixed to English (en) and we use German→English as the parent language pair. Data: The parent model was trained on parallel data of WMT 2018 news translation task1 and synthetic data released by Sennrich et al. (2016a). For the child language pairs, we used IWSLT 2018 low-resource MT task data (eu-en) (Jan et al., 2018), IWSLT 2014 MT task data (sl-en) (Cettolo et al., 2014), TED talk data from (Qi et al., 2018) (be-en/az-en), and subsampling of WMT 2018 news translation task data (tr-en). Statistics of the parallel corpora are given in Table 1. Note that the child source languages are linguistically far from the parent source. Every training dataset was preprocessed with the Moses tokenizer2, where the source side was lowercased and the target side was frequent-cased. 
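As a side note on data preparation, the synthetic-data construction of Section 3.3 reduces to a simple filter over the parent corpus; the sketch below is a rough illustration with hypothetical function and variable names, not the authors' actual preprocessing script.

```python
def make_synthetic_pair(parent_src, parent_tgt, child_vocab, unk="<unk>"):
    """Turn one parent-language sentence pair into a synthetic child pair:
    source tokens missing from the child vocabulary become <unk>,
    while the target side is kept unchanged."""
    masked_src = [t if t in child_vocab else unk for t in parent_src]
    return masked_src, parent_tgt

# Hypothetical German->English pair with a Basque child vocabulary,
# mirroring the example of Figure 4:
child_vocab = {",", "!", "John"}
src, tgt = make_synthetic_pair(["Hallo", ",", "John", "!"],
                               ["Hello", ",", "John", "!"],
                               child_vocab)
# src == ["<unk>", ",", "John", "!"]
```

The resulting pairs are then simply mixed with the (oversampled) child parallel data before training the child model.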
Transfer learning: All NMT models in our experiments follow the base 6-layer Transformer architecture of Vaswani et al. (2017), except that the source and target embedding weights are not tied. Each source language was encoded with byte pair encoding (BPE) (Sennrich et al., 2016c) with 20k merge operations, while the target language was encoded with 50k BPE merges. Dropout with probability of 0.3 was applied to Transformer prepost/activation/attention components in both par1http://www.statmt.org/wmt18/translation-task.html 2http://www.statmt.org/moses/ ent and child model trainings. Training was carried out with Sockeye (Hieber et al., 2017) using the Adam optimizer (Kingma and Ba, 2014) with the default parameters. The maximum sentence length was set to 100 and the batch size to 4,096 words. We stopped the training when perplexity on a validation set was not improving for 12 checkpoints. We set checkpoint frequency to 10,000 updates for the parent model and 1,000 updates for the child models. The parent model yields 39.2% BLEU on WMT German→English newstest2016 test set. Baseline: As a baseline child model without transfer learning, we used the same setting as above but learned a shared source-target BPE vocabulary with 20k merge operations. We also tied source and target embeddings as suggested for low-resource settings in Schamper et al. (2018). Dropout was applied also to the embedding weights for the baselines. Multilingual: We also compare our transfer learning with the multilingual training where a single, shared NMT model is trained for the parent and child language pairs together from scratch (Johnson et al., 2017). For each child task, we learned a joint BPE vocabulary of all source and target languages in the parent/child tasks with 32k merge operations. The training data for the child task was oversampled so that each mini-batch has roughly 1:1 ratio of the parent/child training examples. Note that we built a different multilingual model for each child task. Since they depend on shared vocabularies, we should restructure the vocabulary and retrain the model for each of the new language pairs we wish to adapt to. Cross-lingual word embedding: To pre-train word embeddings, we used Wikimedia dumps3 of timestamp 2018-11-01 for all child languages except Turkish for which we used WMT News Crawl 2016-2017. From Wikimedia dumps, the actual articles were extracted first4, which were split to sentences using the StanfordCoreNLP toolkit (Manning et al., 2014). Monolingual embeddings were trained with fasttext (Bojanowski et al., 2017) with minimum word count 0. For learning the cross-lingual mappings, we ran 10 epochs of adversarial training and another 10 epochs of dictionary-based refinement using MUSE (Con3https://dumps.wikimedia.org/ 4https://github.com/attardi/wikiextractor/ 1251 BLEU [%] System eu-en sl-en be-en az-en tr-en Baseline 1.7 10.1 3.2 3.1 0.8 Multilingual (Johnson et al., 2017) 5.1 16.7 4.2 4.5 8.7 Transfer (Zoph et al., 2016) 4.9 19.2 8.9 5.3 7.4 + Cross-lingual word embedding 7.4 20.6 12.2 7.4 9.4 + Artificial noises 8.2 21.3 12.8 8.1 10.1 + Synthetic data 9.7 22.1 14.0 9.0 11.3 Table 2: Translation results of different transfer learning setups. neau et al., 2018). We chose top 20k types as discriminator inputs and 10k as maximum dictionary rank. Artificial noises: Following Kim et al. (2018), we used these values for the noise model: pins = 0.1, Vins = 50, pdel = 0.1, and dper = 3. We empirically found that these values are optimal also for our purpose. 
The parent model trained with noises gives 38.2% BLEU in WMT German→English newstest2016: 1.0% worse than without noises. Synthetic data: We uniformly sampled 1M sentence pairs from German→English parallel data used for the parent training and processed them according to Section 3.3. The child model parallel data was oversampled to 500k sentence pairs, making an overall ratio of 1:2 between the parallel and synthetic data. We also tried other ratio values, e.g. 1:1, 1:4, or 2:1, but the performance was consistently worse. Table 2 presents the results. Plain transfer learning already gives a boost but is still far from a satisfying quality, especially for Basque→-English and Azerbaijani→English. On top of that, each of our three techniques offers clear, incremental improvements in all child language pairs with a maximum of 5.1% BLEU in total. Cross-lingual word embedding shows a huge improvement up to +3.3% BLEU, which exhibits the strength of connecting parent-child vocabularies on the embedding level. If we train the parent model with artificial noises on the source side, the performance is consistently increased by up to +0.8% BLEU. This occurs even when dropout is used in the parent model training; randomizing word orders provides meaningful regularization which cannot be achieved via dropout. Finally, our synthetic data extracted from the parent parallel data is proved to be effective in low-resource transfer to substantially different languages: We obtain an additional gain of at most +1.5% BLEU. Our results also surpass the multilingual joint training by a large margin in all tasks. One shared model for multiple language pairs inherently limits the modeling capacity for each task. Particularly, if one language pair has much smaller training data than the other, oversampling the lowresource portion is not enough to compensate the scale discrepancy in multilingual training. Transfer learning with our add-on techniques is more efficient to exploit knowledge of high-resource language pairs and fine-tune the performance towards a child task. 5 Analysis In this section, we further investigate our methods in detail in comparison to their similar variants, and also perform ablation studies for the NMT transfer in general. 5.1 Types of Pre-trained Embedding Pre-trained embedding BLEU [%] None 5.3 Monolingual 6.3 Cross-lingual (az-de) 7.4 Cross-lingual (az-en) 7.1 Table 3: Azerbaijani→English translation results with different types of pre-trained source embeddings. We analyze the effect of the cross-linguality of pre-trained embeddings in Table 3. We observe that monolingual embedding without a crosslingual mapping also improves the transfer learning, but is significantly worse than our proposed embedding, i.e. mapped to the parent source (de) 1252 embedding. The mapping can be learned also with the target (en) side with the same procedure as in Section 3.1. The target-mapped embedding is not compatible with the pre-trained encoder but directly guides the child model to establish the connection between the new source and the target. It also improves the system, but our method is still the best among the three embedding types. 5.2 Synthetic Data Generation Synthetic data BLEU [%] None 8.2 Back-translation 8.3 Empty source 8.2 Copied target 8.9 Parent model data 9.7 + Cross-lingual replacement 8.7 Table 4: Basque→English translation results with synthetic data generated using different methods. In Table 4, we compare our technique in Section 3.3 with other methods of generating synthetic data. 
For a fair comparison, we used the same target side corpus (1M sentences) for all these methods. As explained in Section 3.3, back-translation (Sennrich et al., 2016b) is not beneficial here because the generated source is of too low quality. Empty source sentence is proposed along with back-translation as its simplification, which does not help either in transfer learning. Copying target sentences to the source side is yet another easy way to obtain synthetic data (Currey et al., 2017). It gives an improvement to a certain extent; however, our method of using the parent model data works much better in transfer learning. We manually looked at the survived tokens in the source side of our synthetic data. We observed lots of overlapping tokens over the parent and child source vocabularies even if they were not shared: 4,487 vocabulary entries between Basque and German. Approximately 2% of them are punctuation symbols and special tokens, 7% are digits, and 62% are made of Latin alphabets, a large portion of which is devoted to English words (e.g. named entities) or their parts. The rest of the vocabulary is mostly of noisy tokens with exotic alphabets. As Figure 4 illustrates, just punctuation symbols and named entities can already define a basic structure of the original source sentence. Such tokens play the role of anchors in translation; they are sure to be copied to the target side. The surrounding <unk> tokens are spread according to the source language structure, whereas merely copying the target sentence to the source (Currey et al., 2017) ignores the structural difference between source and target sentences. Note that our trick applies also to the languages with completely different alphabets, e.g. Belarusian and German (see Table 2). We also tested an additional processing for our synthetic data to reduce the number of unknown tokens. We replaced non-overlapping tokens in the German source side with the closest Basque token in the cross-lingual word embedding space. The result is, however, worse than not replacing them; we noticed that this subword-by-subword translation produces many Basque phrases with wrong BPE merges (Kim et al., 2018). 5.3 Vocabulary Size BLEU [%] BPE merges sl-en be-en 10k 21.0 11.2 20k 20.6 12.2 50k 20.2 10.9 70k 20.0 10.9 Table 5: Translation results with different sizes of the source vocabulary. Table 5 estimates how large the vocabulary should be for the language-switching side in NMT transfer. We varied the number of BPE merges on the source side, fixing the target vocabulary to 50k merges. The best results are with 10k or 20k of BPE merges, which shows that the source vocabulary should be reasonably small to maximize the transfer performance. Less BPE merges lead to more language-independent tokens; it is easier for the cross-lingual embedding to find the overlaps in the shared semantic space. If the vocabulary is excessively small, we might lose too much language-specific details that are necessary for the translation process. This is shown in the 10k merges of Belarusian→English. 5.4 Freezing Parameters Lastly, we conducted an ablation study of freezing parent model parameters in the child training 1253 Frozen parameters BLEU [%] None 21.0 Target embedding 21.4 + Target self-attention 22.1 + Encoder-decoder attention 21.8 + Feedforward sublayer 21.3 + Output layer 21.9 Table 6: Slovenian→English translation results with freezing different components of the decoder. process (Table 6). 
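In practice, freezing amounts to excluding the chosen parameter groups from the gradient update. The snippet below is a hypothetical PyTorch-style illustration (the experiments in this paper were run with Sockeye, whose mechanism differs), and the name patterns for the decoder sub-modules are assumptions about a generic Transformer implementation.

```python
import torch.nn as nn

def freeze_decoder_parts(decoder: nn.Module, patterns=("tgt_embed", "self_attn")):
    """Disable gradients for decoder parameters whose names contain one of
    the given patterns, e.g. the target embedding and target self-attention."""
    for name, param in decoder.named_parameters():
        if any(p in name for p in patterns):
            param.requires_grad = False

# Only the remaining (unfrozen) parameters are handed to the optimizer:
# optimizer = torch.optim.Adam(p for p in model.parameters() if p.requires_grad)
```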
We show only the results when freezing the decoder; in our experiments, freezing any component of the encoder always degrades the translation performance. The experiments were done at the final stage with all of our three proposed methods applied. Target embedding and target self-attention parts are independent of the source information, so it makes sense to freeze those parameters even when the source language is changed. On the contrary, encoder-decoder attention represents the relation between source and target sentences, so it should be redefined for a new source language. The performance deteriorates when freezing feedforward sublayers, since it is directly influenced by the encoder-decoder attention layer. The last row means that we freeze all parameters of the decoder; it is actually better than freezing all but the output layer. 6 Related Work Transfer learning is first introduced for NMT in Zoph et al. (2016), yet with a small RNN architecture and on top frequent words instead of using subword units. Nguyen and Chiang (2017) and Kocmi and Bojar (2018) use shared vocabularies of BPE tokens to improve the transfer learning, but this requires retraining of the parent model whenever we transfer to a new child language. Multilingual NMT trains a single model with parallel data of various translation directions jointly from scratch (Dong et al., 2015; Johnson et al., 2017; Firat et al., 2016; Gu et al., 2018). Their methods also rely on shared subword vocabularies so it is hard for their model to adapt to a new language. Cross-lingual word embedding is studied for the usages in MT as follows. In phrase-based SMT, Alkhouli et al. (2014) builds translation models with word/phrase embeddings. Kim et al. (2018) uses cross-lingual word embedding as a basic translation model for unsupervised MT and attach other components on top of it. Artetxe et al. (2018c) and Lample et al. (2018a) initialize their unsupervised NMT models with pre-trained crosslingual word embeddings. Qi et al. (2018) do the same initialization for supervised cases, observing only improvements in multilingual setups. Artificial noises for the source sentences are used to counteract word-by-word training data in unsupervised MT (Artetxe et al., 2018c; Lample et al., 2018a; Kim et al., 2018), but in this work, they are used to regularize the NMT. Neubig and Hu (2018) study adapting a multilingual NMT system to a new language. They train for a child language pair with additional parallel data of its similar language pair. Our synthetic data method does not rely on the relatedness of languages but still shows a good performance. They learn just a separate subword vocabulary for the child language without a further care, which we counteract with cross-lingual word embedding. Sachan and Neubig (2018) show ablation studies on parameter sharing and freezing in one-tomany multilingual setup with shared vocabularies. Our work conduct the similar experiments in the transfer learning setting with separate vocabularies. Platanios et al. (2018) augment a multilingual model with language-specific embeddings from which the encoder and decoder parameters are inferred with additional linear transformations. They only mention its potential to transfer to an unseen language without any results on it. Our work focuses on transferring a pre-trained model to a new language without any change in the model architecture but with an explicit guidance for cross-linguality on the word embedding level. Wang et al. 
(2019) address the vocabulary mismatch in multilingual NMT by using shared embeddings of character n-grams and common semantic concepts. Their method has a strict assumption that the languages should be related orthographically with shared alphabets, while our method is not limited to similar languages and directly benefits from advances in cross-lingual word embedding for distant languages. Another line of research on low-resource MT is unsupervised learning (Lample et al., 2018a,b; Lample and Conneau, 2019; Artetxe et al., 1254 2018b,c; Kim et al., 2018), training translation models only with monolingual data. However, these methods are verified mostly in high-resource language pairs, e.g. French↔English, where there is no need to restrict the training data to only monolingual corpora. In low-resource language pairs with little linguistic similarity, Neubig and Hu (2018) and Guzm´an et al. (2019) show that unsupervised MT methods do not function at all. We tested an unsupervised MT software Lample and Conneau (2019) internally, which also resulted in failure, e.g. 1% BLEU at the Basque→English task of Section 4. Moreover, unsupervised MT methods usually require a very long training time—at least 1-2 weeks with a single GPU—due to its iterative nature, while our cross-lingual transfer needs only a couple of hours of training once you have a parent model. Alternatively, one might consider using parallel data involving a pivot language, either by decoding in two consecutive steps (Kauers et al., 2002; De Gispert and Marino, 2006; Utiyama and Isahara, 2007; Costa-Juss`a et al., 2011) or by creating pivot-based synthetic data (De Gispert and Marino, 2006; Bertoldi et al., 2008; Zheng et al., 2017; Chen et al., 2017). These methods cannot be applied to most of the language pairs from/to English, because it is extremely difficult to collect parallel data with another third language other than English. 7 Conclusion In this paper, we address the problem of transferring an NMT model to unseen, unrelated language pairs. We propose three novel techniques to improve the transfer without vocabulary sharing between parent and child source languages. Firstly, we transform monolingual embeddings of the new language into the embedding space of the parent NMT model. This accomplishes an effective transition of vocabularies on the embedding level. Secondly, we randomize the word orders in the parent model training to avoid overfitting to the parent source language. This makes it easier for the encoder to adapt to the new language syntax. For the first time, we show a practical usage of artificial noises to regularize an NMT model. Lastly, we reuse parallel data of the parent language pair in the child training phase to avoid an abrupt change of the training data distribution. All three methods significantly improve over plain transfer learning with a total gain of up to +5.1% BLEU in our experiments, consistently outperforming multilingual joint training. Our methods do not require retraining of a shared vocabulary or the parent model, enabling an incremental transfer of the same parent model to various (possibly unrelated) languages. Our implementation of the proposed methods is available online.5 As for future work, we will test our methods in the NMT transfer where the target language is switched. We also plan to compare different algorithms for learning the cross-lingual mapping (Artetxe et al., 2018a; Xu et al., 2018; Joulin et al., 2018) to optimize the transfer performance. 
Acknowledgments This work has received funding from the European Research Council (ERC) (under the European Union’s Horizon 2020 research and innovation programme, grant agreement No 694537, project ”SEQCLAS”) and the Deutsche Forschungsgemeinschaft (DFG; grant agreement NE 572/8-1, project ”CoreTec”). The GPU cluster used for the experiments was partially funded by DFG Grant INST 222/1168-1. The work reflects only the authors’ views and none of the funding agencies is responsible for any use that may be made of the information it contains. References Tamer Alkhouli, Andreas Guta, and Hermann Ney. 2014. Vector space models for phrase-based machine translation. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 1–10. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), volume 1, pages 451–462. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 789–798. 5https://github.com/yunsukim86/sockeye-transfer 1255 Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632–3642. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018c. Unsupervised neural machine translation. In Proceedings of 6th International Conference on Learning Representations (ICLR 2018). Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 2015 International Conference on Learning Representations (ICLR 2015). Yoshua Bengio. 2012. Deep learning of representations for unsupervised and transfer learning. In Proceedings of ICML Workshop on Unsupervised and Transfer Learning, pages 17–36. Nicola Bertoldi, Madalina Barbaiani, Marcello Federico, and Roldano Cattoni. 2008. Phrase-based statistical machine translation with pivot languages. In International Workshop on Spoken Language Translation (IWSLT) 2008. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Rich Caruana. 1995. Learning many related tasks at the same time with backpropagation. In Advances in neural information processing systems, pages 657– 664. Mauro Cettolo, Jan Niehues, Sebastian St¨uker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th iwslt evaluation campaign, iwslt 2014. In Proceedings of the 11th International Workshop on Spoken Language Translation (IWSLT 2011), pages 2–17, Hanoi, Vietnam. Yun Chen, Yang Liu, Yong Cheng, and Victor OK Li. 2017. A teacher-student framework for zeroresource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1925–1935. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. In Proceedings of 6th International Conference on Learning Representations (ICLR 2018). 
Marta R Costa-Juss`a, Carlos Henr´ıquez, and Rafael E Banchs. 2011. Enhancing scarce-resource language translation through pivot combinations. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 1361–1365. Anna Currey, Antonio Valerio Miceli Barone, and Kenneth Heafield. 2017. Copied monolingual data improves low-resource neural machine translation. In Proceedings of the Second Conference on Machine Translation, pages 148–156. Adri`a De Gispert and Jose B Marino. 2006. Catalanenglish statistical machine translation without parallel corpus: bridging through spanish. In 5th International Conference on Language Resources and Evaluation (LREC), pages 65–68. Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1723–1732. Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. In Proceedings of the 15th Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2016), pages 866–875. Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine translation. arXiv:1612.06897. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning (ICML 2017), pages 1243–1252. Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor OK Li. 2018. Universal neural machine translation for extremely low resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 344–354. Francisco Guzm´an, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc’Aurelio Ranzato. 2019. Two new evaluation datasets for low-resource machine translation: Nepali-english and sinhala-english. arXiv:1902.01382. Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A toolkit for neural machine translation. arXiv:1712.05690. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2016), pages 1367–1377. 1256 Niehues Jan, Roldano Cattoni, St¨uker Sebastian, Mauro Cettolo, Marco Turchi, and Marcello Federico. 2018. The iwslt 2018 evaluation campaign. In Proceedings of the 15th International Workshop on Spoken Language Translation (IWSLT 2018), pages 2–6. Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, et al. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association of Computational Linguistics (TACL), 5(1):339–351. Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv´e J´egou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2979–2984. Manuel Kauers, Stephan Vogel, Christian F¨ugen, and Alex Waibel. 2002. Interlingua based statistical machine translation. In Proceedings of the 7th International Conference on Spoken Language Processing (ICSLP 2002). Yunsu Kim, Jiahui Geng, and Hermann Ney. 2018. Improving unsupervised word-by-word translation with language model and denoising autoencoder. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 862–868. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv:1412.6980. Tom Kocmi and Ondˇrej Bojar. 2018. Trivial transfer learning for low-resource neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 244– 252. Association for Computational Linguistics. Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the 1st ACL Workshop on Neural Machine Translation (WNMT 2017), pages 28–39. Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. arXiv:1901.07291. Guillaume Lample, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In Proceedings of 6th International Conference on Learning Representations (ICLR 2018). Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, et al. 2018b. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049. Minh-Thang Luong and Christopher D Manning. 2015. Stanford neural machine translation systems for spoken language domains. In Proceedings of the 12th International Workshop on Spoken Language Translation (IWSLT 2015), pages 76–79. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv:1301.3781. Graham Neubig and Junjie Hu. 2018. Rapid adaptation of neural machine translation to new languages. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 875–880. Toan Q Nguyen and David Chiang. 2017. Transfer learning across low-resource, related languages for neural machine translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 296–301. Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, and Tom Mitchell. 2018. Contextual parameter generation for universal neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 425–435. Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 529–535. Prajit Ramachandran, Peter Liu, and Quoc Le. 2017. Unsupervised pretraining for sequence to sequence learning. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 383–391. Devendra Sachan and Graham Neubig. 2018. Parameter sharing methods for multilingual self-attentional translation models. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 261–271. Association for Computational Linguistics. Julian Schamper, Jan Rosendahl, Parnia Bahar, Yunsu Kim, Arne Nix, and Hermann Ney. 2018. The rwth aachen university supervised machine translation systems for wmt 2018. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 496–503. 1257 Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh neural machine translation systems for wmt 16. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, volume 2, pages 371–376. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 86–96. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016c. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1715–1725. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems-Volume 2, pages 3104–3112. MIT Press. Sebastian Thrun and Lorien Pratt. 2012. Learning to learn. Springer Science & Business Media. Masao Utiyama and Hitoshi Isahara. 2007. A comparison of pivot methods for phrase-based statistical machine translation. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 484–491. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Xinyi Wang, Hieu Pham, Philip Arthur, and Graham Neubig. 2019. Multilingual neural machine translation with soft decoupled encoding. In Proceedings of the 2019 International Conference on Learning Representations (ICLR 2019). Ruochen Xu, Yiming Yang, Naoki Otani, and Yuexin Wu. 2018. Unsupervised cross-lingual transfer of word embedding spaces. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2465–2474. Hao Zheng, Yong Cheng, and Yang Liu. 2017. Maximum expected likelihood estimation for zeroresource neural machine translation. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 4251–4257. AAAI Press. Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), pages 1568– 1575.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1258–1268 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1258 Improved Zero-shot Neural Machine Translation via Ignoring Spurious Correlations Jiatao Gu* ♦, Yong Wang* †, Kyunghyun Cho ♦‡ and Victor O.K. Li † ♦Facebook AI Research †The University of Hong Kong ‡New York University, CIFAR Azrieli Global Scholar ♦{jgu, kyunghyuncho}@fb.com †{wangyong, vli}@eee.hku.hk Abstract Zero-shot translation, translating between language pairs on which a Neural Machine Translation (NMT) system has never been trained, is an emergent property when training the system in multilingual settings. However, na¨ıve training for zero-shot NMT easily fails, and is sensitive to hyper-parameter setting. The performance typically lags far behind the more conventional pivot-based approach which translates twice using a third language as a pivot. In this work, we address the degeneracy problem due to capturing spurious correlations by quantitatively analyzing the mutual information between language IDs of the source and decoded sentences. Inspired by this analysis, we propose to use two simple but effective approaches: (1) decoder pre-training; (2) backtranslation. These methods show significant improvement (4 ∼22 BLEU points) over the vanilla zero-shot translation on three challenging multilingual datasets, and achieve similar or better results than the pivot-based approach. 1 Introduction Despite the recent domination of neural networkbased models (Sutskever et al., 2014; Bahdanau et al., 2014; Vaswani et al., 2017) in the field of machine translation, which have fewer pipelined components and significantly outperform phrasebased systems (Koehn et al., 2003), Neural Machine Translation (NMT) still works poorly when the available number of training examples is limited. Research on low-resource languages is drawing increasing attention, and it has been found promising to train a multilingual NMT (Firat et al., 2016a) model for high- and row-resource languages to deal with low-resource translation (Gu et al., 2018b). As an extreme in terms of the number of supervised examples, prior works dug into * Equal contribution. translation with zero-resource (Firat et al., 2016b; Chen et al., 2017; Lample et al., 2018a,b) where the language pairs in interest do not have any parallel corpora between them. In particular, Johnson et al. (2017) observed an emergent property of zero-shot translation where a trained multilingual NMT model is able to automatically do translation on unseen language pairs; we refer to this setting as zero-shot NMT from here on. In this work, we start with a typical degeneracy issue of zero-shot NMT, reported in several recent works (Arivazhagan et al., 2018; Sestorain et al., 2018), that zero-shot NMT is sensitive to training conditions, and the translation quality usually lags behind the pivot-based methods which use a shared language as a bridge for translation (Utiyama and Isahara, 2007; Cheng et al., 2016; Chen et al., 2017). We first quantitatively show that this degeneracy issue of zeroshot NMT is a consequence of capturing spurious correlation in the data. Then, two approaches are proposed to help the model ignore such correlation: language model pre-training and backtranslation. We extensively evaluate the effectiveness of the proposed strategies on four languages from Europarl, five languages from IWSLT and four languages from MultiUN. 
Our experiments demonstrate that the proposed approaches significantly improve the baseline zero-shot NMT performance and outperforms the pivot-based translation in some language pairs by 2∼3 BLEU points. 2 Background Given a source sentence x = {x1, ..., xT ′}, a neural machine translation model factorizes the distribution over output sentences y = {y1, ..., yT } into a product of conditional probabilities: p(y|x; θ) = T+1 Y t=1 p(yt|y0:t−1, x1:T ′; θ), (1) 1259 where special tokens y0 (⟨bos⟩) and yT+1 (⟨eos⟩) are used to represent the beginning and the end of a target sentence. The conditional probability is parameterized using a neural network, typically, an encoder-decoder architecture based on either RNNs (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014), CNNs (Gehring et al., 2017) or the Transformers (Vaswani et al., 2017). Multilingual NMT We start with a many-tomany multilingual system similar to Johnson et al. (2017) which leverages the knowledge from translation between multiple languages. It has an identical model architecture as the single pair translation model, but translates between multiple languages. For a different notation, we use (xi, yj) where i, j ∈{0, ..., K} to represent a pair of sentences translating from a source language i to a target language j. K +1 languages are considered in total. A multilingual model is usually trained by maximizing the likelihood over training sets Di,j of all available language pairs S. That is: max θ 1 |S| · |Di,j| X (xi,yj)∈Di,j,(i,j)∈S Lj θ(xi, yj), (2) where we denote Lj θ(xi, yj) = log p(yj|xi, j; θ). Specifically, the target language ID j is given to the model so that it knows to which language it translates, and this can be readily implemented by setting the initial token y0 = j for the target sentence to start with.1 The multilingual NMT model shares a single representation space across multiple languages, which has been found to facilitate translating low-resource language pairs (Firat et al., 2016a; Lee et al., 2016; Gu et al., 2018b,c). Pivot-based NMT In practise, it is almost impossible for the training set to contain all K × (K+1) combinations of translation pairs to learn a multilingual model. Often only one (e.g. English) or a few out of the K + 1 languages have parallel sentence pairs with the remaining languages. For instance, we may only have parallel pairs between English & French, and Spanish & English, but not between French & Spanish. What happens if we evaluate on an unseen direction e.g. Spanish to French? A simple but commonly used solution is pivoting: we first translate from Spanish to English, and then from English to French 1 Based on prior works (Arivazhagan et al., 2018), both options work similarly. Without loss of generality, we use the target language ID as the initial token y0 of the decoder. with two separately trained single-pair models or a single multilingual model. However, it comes with two drawbacks: (1) at least 2× higher latency than that of a comparable direct translation model; (2) the models used in pivot-based translation are not trained taking into account the new language pair, making it difficult, especially for the second model, to cope with errors created by the first model. Zero-shot NMT Johnson et al. (2017) showed that a trained multilingual NMT system could automatically translate between unseen pairs without any direct supervision, as long as both source and target languages were included in training. 
In other words, a model trained for instance on English & French and Spanish & English is able to directly translate from Spanish to French. Such an emergent property of a multilingual system is called zero-shot translation. It is conjectured that zero-shot NMT is possible because the optimization encourages different languages to be encoded into a shared space, so that the decoder is detached from the source languages. As evidence, Arivazhagan et al. (2018) measured the "cosine distance" between the encoder's pooled outputs of each sentence pair, and found that the distance decreased during multilingual training. 3 Degeneracy Issue of Zero-shot NMT Despite this attractive property of emergent zero-shot NMT compared to other approaches such as pivot-based methods, prior works (Johnson et al., 2017; Firat et al., 2016b; Arivazhagan et al., 2018) have shown that the quality of zero-shot NMT significantly lags behind pivot-based translation. In this section, we investigate an underlying cause of this particular degeneracy issue. 3.1 Zero-shot NMT is Sensitive to Training Conditions Preliminary Experiments Before drawing any conclusions, we first experimented with a variety of hyper-parameters to train multilingual systems and evaluated them in zero-shot situations, i.e. on language pairs without parallel resources. We performed the preliminary experiments on Europarl (http://www.statmt.org/europarl/) with the following languages: English (En), French (Fr), Spanish (Es) and German (De), with no parallel sentences between any two of Fr, Es and De. We used newstest2010 (http://www.statmt.org/wmt18/translation-task.html) as the validation set, which contains all six directions. The corpus was preprocessed with 40,000 BPE operations across all the languages. We chose the Transformer (Vaswani et al., 2017) – the state-of-the-art NMT architecture on a variety of languages – with parameters dmodel = 512, dhidden = 2048, nheads = 8, nlayers = 6. Multiple copies of this network were trained on data with all parallel directions for {De, Es, Fr} & En, while we varied other hyper-parameters. As the baseline, six single-pair models were trained to produce the pivot results.
Figure 1: Partial results on zero-shot and parallel directions on the Europarl dataset with variant multilingual training conditions (blue: default, red: large-bs, orange: pytorch-init, green: attn-drop, purple: layerwise-attn). The dashed lines are the pivot-based or direct translation results from baseline models.
Results The partial results are shown in Fig. 1, including five of the many conditions we tested. The default setting uses the exact Transformer architecture with Xavier uniform (Glorot and Bengio, 2010) initialization for all layers, and is trained with lrmax = 0.005, twarmup = 4000, dropout = 0.1, nbatch = 2400 tokens/direction. Compared to the default setting, large-bs uses a bigger batch size of 9,600; attn-drop adds an additional dropout (0.1) on each attention head (Vaswani et al., 2017); pytorch-init initializes all the weights with PyTorch's default method (https://pytorch.org/docs/master/_modules/torch/nn/modules/linear.html#Linear); layerwise-attn changes the conventional architecture to use layer-wise attention (Gu et al., 2018a) between the encoder and decoder. All results are evaluated on the validation set using greedy decoding. From Fig. 1, we can observe that the translation quality of zero-shot NMT is highly sensitive to the hyper-parameters (e.g. layerwise-attn completely fails on zero-shot pairs), while almost all the models reach the same level as the baseline on parallel directions. Also, even with the most stable setting (default), the translation quality of zero-shot NMT still falls far below that of pivot-based translation on some pairs such as Fr-De. 3.2 Performance Degeneracy is Due to Capturing Spurious Correlation We look into this problem with a quantitative analysis, re-thinking the multilingual training objective in Eq. (2). For convenience, we model the decoder's output yj as a combination of two factors: the output language ID z ∈ {0, . . . , K} and language-invariant semantics s (see Fig. 2 for a graphical illustration). In this work, both z and s are unobserved variables before yj is generated. Note that z is not necessarily equal to the language ID j.
Figure 2: A conceptual illustration of decoupling the output translation (yj) into two latent factors (language type and semantics), where the undesired spurious correlation (in red) will be wrongly captured if i is always translated to j during training.
The best practice for zero-shot NMT is to make z and s conditionally independent given the source sentence. That is to say, z is controlled by j and s is controlled by xi. This allows us to change the target language by setting j to a desired language, and is equivalent to ignoring the correlation between xi and z. That is, the mutual information between the source language ID i and the output language ID z – I(i; z) – is 0. However, conventional multilingual training on an imbalanced dataset makes zero-shot NMT problematic, because the MLE objective will try to capture all possible correlations in the data, including the spurious dependency between i and z. For instance, consider a multilingual NMT model in which Es as the source is only ever paired with En as the target. Although it is undesirable for the model to capture the dependency between i (Es) and z (En), MLE has no mechanism to prevent it (i.e., I(i; z) > 0) from happening. In other words, we cannot explicitly control the trade-off between I(i; z) and I(j; z) with MLE training. When I(i; z) increases as opposed to I(j; z), the
decoder ignores j, which makes it impossible for the trained model to perform zero-shot NMT, as the decoder cannot output a translation in a language that was not trained before. Quantitative Analysis We performed the quantitative analysis on the estimated mutual information I(i; z) as well as the translation quality of zero-shot translation on the validation set. As an example, we show the results of large-bs setting in Fig. 3 where the I(i; z) is estimated by: I(i; z) ≈ 1 (K + 1)2 X i,j log  ˜p(z, i) ˜p(z) · ˜p(i)  , (3) where the summation is over all possible language pairs, and ˜p(·) represents frequency. The latent language identity z = φ(yj) is estimated by an external language identification tool given the actual output (Lui and Baldwin, 2012). In Fig. 3, the trend of zero-shot performance is inversely proportional to I(i; z), which indicates that the degeneracy is from the spurious correlation. The analysis of the mutual information also explains the sensitivity issue of zero-shot NMT during training. As a side effect of learning translation, I(i; z) tends to increase more when the training conditions make MT training easier (e.g. large batch-size). The performance of zero-shot NMT becomes more unstable and fails to produce translation in the desired language (j). 4 Approaches In this section, we present two existing, however, not investigated in the scenario of zero-shot NMT approaches – decoder pre-training and backtranslation – to address this degeneracy issue. 0 20 40 60 80 100 x 1000 steps 0 5 10 15 20 25 Zero-shot Average BLEU MI BLEU 0.000 0.025 0.050 0.075 0.100 0.125 0.150 0.175 I(X; Z) Figure 3: The learning curves of the mutual information between input and output language IDs as well as the averaged BLEU scores of all zero-shot directions on the validation sets for the large-bs setting. 4.1 Language Model Pre-training The first approach strengthens the decoder language model (LM) prior to MT training. Learning the decoder language model increases I(j; z) which facilitates zero-shot translation. Once the model captures the correct dependency that guides the model to output the desired language, it is more likely for the model to ignore the spurious correlation during standard NMT training. That is, we pre-train the decoder as a multilingual language model. Similar to Eq. (4): max θ 1 |S| · |Di,j| X (xi,yj)∈Di,j,(i,j)∈S ˜Lj θ(yj), (4) where ˜Lj θ(yj) = log p(yj|0, j; θ), which represents that pre-training can be implemented by simply replacing all the source representations by zero vectors during standard NMT training (Sennrich et al., 2016). In Transformer, it is equivalent to ignoring the attention modules between the encoder and decoder. The proposed LM pre-training can be seen as a rough approximation of marginalizing all possible source sentences, while empirically we found it worked well. After a few gradient descent steps, the pre-trained model continues with MT training. In this work, we only consider using the same parallel data for pre-training. We summarize the pros and cons as follows: Pros: Efficient (a few LM training steps + NMT training); no additional data needed; Cons: The LM pre-training objective does not necessarily align with the NMT objective. 1262 4.2 Back-Translation In order to apply language model training along with the NMT objective, we have to take the encoder into account. We use back-translation (BT, Sennrich et al., 2016), but in particular for multilingual training. 
4.2 Back-Translation

In order to apply language model training along with the NMT objective, we have to take the encoder into account. We use back-translation (BT; Sennrich et al., 2016), but tailored to multilingual training. Unlike the original purpose of using BT for semi-supervised learning, we utilize BT to generate synthetic parallel sentences for all zero-shot directions (Firat et al., 2016b), and train the multilingual model from scratch on the merged datasets of both real and synthetic sentences. By doing so, every language is forced to translate to all the other languages. Thus, I(i; z) is effectively close to 0 from the beginning, preventing the model from capturing the spurious correlation between i and z.

Generating the synthetic corpus requires at least a reasonable starting point that can translate the zero-shot pairs; this starting point can be obtained either through a pivot language (denoted BTTP) or from the current zero-shot NMT model trained without BT (denoted BTZS). For instance, in the previous examples, to generate synthetic pairs for Es-Fr given the training set of En-Fr, BTTP translates every En sentence to Es with a pre-trained En-Es model (the one used in pivot-based MT), while BTZS uses the pre-trained zero-shot NMT to directly translate all Fr sentences to Es. Next, we pair the generated sentences in the reverse direction Es-Fr and merge them into the training set. The same multilingual training is applied after creating synthetic corpora for all translation pairs. Similar methods have also been explored by Firat et al. (2016b); Zheng et al. (2017); Sestorain et al. (2018), but have not been studied or used in the context of zero-shot NMT.

Pros: BT explicitly avoids the spurious correlation. Also, BTZS potentially improves further by utilizing the zero-shot NMT model augmented with LM pre-training.
Cons: BT is computationally more expensive, as we need to create synthetic parallel corpora for all language pairs (up to O(K^2)) to train a multilingual model for K languages; the performance of both BTTP and BTZS might be affected by the quality of the pre-trained models.
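As a rough illustration of the two back-translation variants just described, the sketch below builds synthetic corpora for the zero-shot directions; `translate` and the corpus containers are hypothetical helpers rather than the released implementation.

```python
def build_bt_corpus(en_parallel, zero_shot_pairs, translate, mode="BTTP"):
    """Synthetic parallel data for zero-shot directions (Sec. 4.2 sketch).

    en_parallel[j]   : list of (en_sent, j_sent) pairs from the real En-j training data
    zero_shot_pairs  : directions (i, j) with no real parallel data
    translate(sents, src, tgt, model): assumed batched decoding helper
    """
    synthetic = {}
    for (i, j) in zero_shot_pairs:                  # we want synthetic data for i -> j
        en_side = [en for en, _ in en_parallel[j]]
        tgt_side = [y for _, y in en_parallel[j]]   # real sentences in language j
        if mode == "BTTP":
            # a pre-trained pivot model translates the aligned En side into language i
            src_side = translate(en_side, "en", i, model="pivot")
        else:  # "BTZS": the current zero-shot model translates j directly into i
            src_side = translate(tgt_side, j, i, model="zero_shot")
        # pair synthetic sources with real targets in the reverse direction i -> j
        synthetic[(i, j)] = list(zip(src_side, tgt_side))
    return synthetic
```

The resulting pairs would then be merged with the real parallel data before re-running the standard multilingual training.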
5 Experiments

5.1 Experimental Settings

Dataset  We extensively evaluate the proposed approaches (LM, BTTP, BTZS) on three multilingual datasets across a variety of languages: Europarl, IWSLT (https://sites.google.com/site/iwsltevaluation2017) and MultiUN (http://opus.nlpl.eu/MultiUN.php). The detailed statistics of the training sets are in Table 1, where we simulate the zero-shot settings by only allowing parallel sentences from/to English. With IWSLT, we also simulate the scenario of having a chain of pivot languages (IWSLT-b). Another dataset (Europarl-b) is included where the zero-shot pairs have neither direct nor pivot parallel sentences (similar to unsupervised translation); in such cases, pivot-based methods (including the proposed BTTP) are not applicable. We use the standard validation and test sets to report the zero-shot performance, and we preprocess all the datasets following the protocol used in the preliminary experiments.

| Dataset    | Parallel pairs               | Size/pair |
|------------|------------------------------|-----------|
| Europarl   | Es-En, De-En, Fr-En          | 2M        |
| Europarl-b | Es-En, Fr-De                 | 1.8M      |
| IWSLT      | De-En, It-En, Nl-En, Ro-En   | 0.22M     |
| IWSLT-b    | De-En, En-It, It-Ro, Ro-Nl   | 0.22M     |
| MultiUN    | Ar-En, Ru-En, Zh-En          | 2M        |

Table 1: Overall dataset statistics, where each pair has a similar number of examples shown in the rightmost column (we sub-sampled 2M sentences per language pair for MultiUN). All the remaining directions are used to evaluate the performance of zero-shot NMT.

Training Conditions  For all non-IWSLT experiments, we use the same architecture as the preliminary experiments with the default training condition, which is the most stable setting for zero-shot NMT in Sec. 3.1. Since IWSLT is much smaller than the other two datasets, we find that the same hyper-parameters, except with t_warmup = 8000 and dropout = 0.2, work better.

Models  As baselines, two pivot-based translation approaches are considered:
• PIV-S: pivoting through two single-pair NMT models trained on each pair;
• PIV-M: pivoting through a single multilingual NMT model trained on all available directions.
Moreover, we directly use the multilingual system that produces the PIV-M results as the vanilla zero-shot NMT baseline.

As described in Sec. 4, both the LM pre-training and BT use the same datasets as MT training. By default, we take the checkpoint after 20,000 steps of LM pre-training to initialize the NMT model, as our preliminary exploration implied that further increasing the pre-training steps would not be helpful for zero-shot NMT. For BTTP, we choose either PIV-S or PIV-M to generate the synthetic corpus based on the average BLEU scores on parallel data. On the other hand, we always select the best zero-shot model with LM pre-training for BTZS, assuming that pre-training consistently improves the translation quality of zero-shot NMT.

5.2 Model Selection for Zero-shot NMT

In principle, zero-shot translation assumes we cannot access any parallel resource for the zero-shot pairs during training, including cross-validation for selecting the best model. However, according to Fig. 1, the performance of zero-shot NMT tends to drop while the parallel directions are still improving, which indicates that simply selecting the best model based on the validation set of parallel directions is sub-optimal for zero-shot pairs.

In this work, we propose to select the best model by maximizing the likelihood over all available validation sets \hat{D}_{i,j} of parallel directions, together with a language model score from a fully trained language model θ′ (Eq. (4)). That is,

    \sum_{(x_i, y_j) \in \hat{D}_{i,j},\, (i,j) \in S} \Big[ L^j_\theta(x_i, y_j) + \sum_{(i,k) \notin S,\, i \neq k} \frac{\tilde{L}^k_{\theta'}(\hat{y}_k)}{K - |S|} \Big],    (5)

where \hat{y}_k is the greedy decoding output generated from the current model p(\cdot \mid x_i, k; \theta) by forcing it to translate x_i to a language k that has no parallel data with i during training. The first term measures the learning progress of machine translation, and the second term shows the level of degeneracy in zero-shot NMT. Therefore, when the spurious correlation between the input and decoded languages is wrongly captured by the model, the desired language model score will decrease accordingly.
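A simplified sketch of this selection criterion is given below; the `model.log_prob`, `lm`, and `greedy_decode` helpers are assumed interfaces, and the normalization is only approximate (Eq. (5) divides by K − |S|), so this is an illustration of the idea rather than the authors' implementation.

```python
def selection_score(model, lm, val_sets, supervised, all_langs, greedy_decode):
    """Approximate model-selection criterion of Eq. (5).

    val_sets[(i, j)] : held-out parallel sentences for a supervised direction (i, j)
    supervised       : set of directions with real parallel data
    lm(sent, lang)   : log-probability of `sent` under a fully trained language model
    greedy_decode(model, x, k): forced translation of x into language k (assumed)
    """
    mt_term, lm_term, n_pairs = 0.0, 0.0, 0
    for (i, j), pairs in val_sets.items():
        if (i, j) not in supervised:
            continue
        zero_shot = [k for k in all_langs if k != i and (i, k) not in supervised]
        for x, y in pairs:
            mt_term += model.log_prob(y, x, j)            # translation likelihood term
            # degeneracy term: LM score of greedy zero-shot outputs for the same source
            lm_term += sum(lm(greedy_decode(model, x, k), k)
                           for k in zero_shot) / max(len(zero_shot), 1)
            n_pairs += 1
    return (mt_term + lm_term) / max(n_pairs, 1)
```

The checkpoint maximizing this score would be kept as the final zero-shot model.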
5.3 Results and Analysis

Overall Performance Comparison  We show the translation quality of zero-shot NMT on the three datasets in Table 2. All the results (including the pivot-based approaches) are generated using beam search with beam size = 4 and length penalty α = 0.6 (Vaswani et al., 2017).

Figure 4: Learning curves of the two proposed approaches (LM, BTZS) and the vanilla ZS on Europarl Fr→De with two conditions (default, large-bs). The red dashed line is the pivot-based baseline.

Experimental results in Table 2 demonstrate that both our proposed approaches achieve significant improvements in zero-shot translation for both directions in all the language pairs. With LM pre-training alone, zero-shot NMT has already closed the gap between its performance and that of the strong pivot-based baseline across datasets. For pairs which are lexically more similar to each other than to the pivot language (e.g. Es-Fr vs. En), ZS+LM achieved much better performance than its pivot-based counterpart. Depending on which languages we consider, zero-shot NMT with the help of BTTP & BTZS can achieve a significant improvement of around 4∼22 BLEU points compared to the naïve approach. For a fair comparison, we also re-implement the alignment method proposed by Arivazhagan et al. (2018) based on cosine distance; the results are shown as ZS+Align in Table 2 and are on average 1.5 BLEU points lower than our proposed ZS+LM approach, indicating that our approaches might fix the degeneracy issue better. As a reference upper bound, we also include the results of a fully supervised setting, where all the language pairs are provided for training. Table 2 shows that the proposed BTTP & BTZS are competitive and even very close to this upper bound, and BTZS is often slightly better than BTTP across different languages.

(a) Europarl: De, Es, Fr ↔ En
| Model           | De-Es ← | De-Es → | De-Fr ← | De-Fr → | Es-Fr ← | Es-Fr → | Zero Avg | Parallel Avg |
|-----------------|---------|---------|---------|---------|---------|---------|----------|--------------|
| PIV-S           | 26.2    | 31.2    | 25.9    | 32.2    | 35.7    | 38.0    | 31.5     | 35.0         |
| PIV-M           | 26.2    | 31.1    | 25.2    | 31.5    | 35.4    | 37.1    | 31.1     | 34.4         |
| ZS              | 22.1    | 30.2    | 21.7    | 29.6    | 36.2    | 36.7    | 29.4     | 34.4         |
| ZS+Align (2018) | 24.7    | 31.4    | 23.8    | 31.0    | 37.3    | 38.5    | 31.1     | 34.5         |
| ZS+LM           | 25.9    | 32.8    | 25.5    | 32.3    | 39.3    | 40.0    | 32.6     | 34.6         |
| ZS+BTTP         | 27.1    | 33.0    | 26.4    | 33.0    | 39.1    | 40.0    | 33.1     | 33.9         |
| ZS+BTZS         | 26.7    | 33.2    | 25.9    | 33.1    | 40.0    | 41.4    | 33.4     | 34.7         |
| Full            | 28.5    | 34.1    | 27.9    | 34.2    | 40.0    | 42.0    | 34.4     | 34.8         |

(b) Europarl: Es ↔ En, Fr ↔ De (no pivot available)
| Model           | Es-Fr ← | Es-Fr → | De-En ← | De-En → |
|-----------------|---------|---------|---------|---------|
| PIV-S           | n/a     | n/a     | n/a     | n/a     |
| PIV-M           | n/a     | n/a     | n/a     | n/a     |
| ZS              | 29.5    | 27.5    | 14.3    | 23.7    |
| ZS+Align (2018) | –       | –       | –       | –       |
| ZS+LM           | 34.9    | 37.1    | 21.5    | 30.0    |
| ZS+BTTP         | n/a     | n/a     | n/a     | n/a     |
| ZS+BTZS         | 39.7    | 40.5    | 25.1    | 30.6    |
| Full            | 40.0    | 42.0    | 27.0    | 33.4    |

(c) IWSLT: De, It, Nl, Ro ↔ En
| Model    | De-It ← | De-It → | De-Nl ← | De-Nl → | De-Ro ← | De-Ro → | It-Nl ← | It-Nl → | It-Ro ← | It-Ro → | Nl-Ro ← | Nl-Ro → | Zero Avg | Parallel Avg |
|----------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|----------|--------------|
| PIV-S    | 16.7    | 16.3    | 19.1    | 17.7    | 17.5    | 15.0    | 18.4    | 18.6    | 18.8    | 17.2    | 18.3    | 17.0    | 17.6     | 29.8         |
| PIV-M    | 21.4    | 21.6    | 24.0    | 23.7    | 22.3    | 20.0    | 22.7    | 22.4    | 23.6    | 21.3    | 23.0    | 21.1    | 22.3     | 35.0         |
| ZS       | 14.8    | 17.2    | 16.7    | 17.8    | 14.9    | 16.6    | 18.4    | 16.1    | 19.7    | 17.8    | 16.2    | 17.5    | 17.0     | 35.0         |
| ZS+LM    | 21.3    | 20.9    | 24.7    | 24.1    | 22.3    | 19.8    | 22.2    | 22.3    | 23.2    | 22.1    | 23.0    | 21.6    | 22.3     | 34.9         |
| ZS+BTTP  | 23.3    | 23.3    | 26.5    | 25.8    | 23.9    | 22.1    | 24.6    | 24.3    | 25.9    | 23.7    | 24.7    | 23.7    | 24.3     | 35.2         |
| ZS+BTZS  | 22.6    | 23.3    | 27.2    | 26.5    | 23.6    | 21.8    | 24.3    | 24.0    | 25.7    | 23.6    | 25.4    | 23.3    | 24.3     | 35.5         |
| Full     | 23.9    | 23.9    | 27.0    | 26.1    | 24.8    | 22.7    | 25.6    | 24.6    | 25.9    | 24.2    | 25.1    | 23.9    | 24.8     | 35.7         |

(d) IWSLT-b: De ↔ En ↔ It ↔ Ro ↔ Nl (chain of pivots)
| Model    | De-It ← | De-It → | De-Nl ← | De-Nl → |
|----------|---------|---------|---------|---------|
| PIV-S    | 16.7    | 16.3    | –       | –       |
| PIV-M    | 22.7    | 22.0    | 18.8    | 18.3    |
| ZS       | 21.3    | 21.0    | 23.9    | 24.0    |
| ZS+LM    | 22.2    | 22.2    | 25.0    | 24.6    |
| ZS+BTTP  | –       | –       | –       | –       |
| ZS+BTZS  | 22.9    | 22.9    | 26.8    | 26.2    |
| Full     | 23.9    | 23.9    | 27.0    | 26.1    |

(e) MultiUN: Ar, Ru, Zh ↔ En
| Model    | Ar-Ru ← | Ar-Ru → | Ar-Zh ← | Ar-Zh → | Ru-Zh ← | Ru-Zh → | Zero Avg | Parallel Avg |
|----------|---------|---------|---------|---------|---------|---------|----------|--------------|
| PIV-S    | 31.4    | 33.5    | 31.2    | 50.4    | 31.2    | 48.0    | 37.6     | 48.4         |
| PIV-M    | 28.4    | 29.9    | 27.7    | 45.7    | 27.2    | 44.2    | 33.8     | 44.5         |
| ZS       | 15.6    | 12.7    | 16.7    | 17.0    | 12.8    | 14.9    | 15.0     | 44.5         |
| ZS+LM    | 28.0    | 21.5    | 27.3    | 43.8    | 19.9    | 43.3    | 30.6     | 45.8         |
| ZS+BTTP  | 31.0    | 31.7    | 30.1    | 48.2    | 29.9    | 46.4    | 36.2     | 45.7         |
| ZS+BTZS  | 31.4    | 33.1    | 31.1    | 49.4    | 30.8    | 46.8    | 37.1     | 47.4         |
| Full     | 31.7    | 32.5    | 30.8    | 49.1    | 29.5    | 47.2    | 36.8     | 45.6         |

Table 2: Overall BLEU scores including parallel and zero-shot directions on the test sets of three multilingual datasets. In (a), (c) and (e), En is used as the pivot language; no language is available as the pivot for (b); we also present partial results in (d) where a chain of pivot languages is used. For all columns, the highest two scores are marked in bold for all models except for the fully-supervised "upper bound".

No Pivots  We conduct experiments on the setting without available pivot languages. As shown in Table 2(b), our training sets only contain Es-En and De-Fr. Then, if we want to translate from De to Fr, pivot-based methods will not work. However, we can still perform zero-shot NMT by simply training a multilingual model on the
merged dataset. As shown in Table 2(a) and (b), although the setting with no pivot pairs performs slightly worse than that with pivot languages, both our approaches (LM, BTZS) substantially improve the vanilla model and achieve competitive performance compared to the fully supervised setting.

A Chain of Pivots  We analyze the case where two languages are connected by a chain of pivot languages. As shown in Table 1 (IWSLT-b), we use IWSLT, which contains pairs for De-En, En-It, It-Ro, Ro-Nl. If we translate from De to Nl with pivot-based translation, pivoting through a chain of languages (De-En-It-Ro-Nl) is required, which suffers from computational inefficiency and error accumulation. In such cases, however, zero-shot NMT is able to directly translate between any two languages. Table 2(d) shows that the performance of pivot-based methods dramatically degrades as the length of the chain increases, while ZS does not have this degradation and still achieves large gains compared to the pivot-based translation.

Robustness Analysis  Fig. 4 shows the learning curves of zero-shot NMT with and without our proposed methods. Both the models with LM pre-training and BTZS are robust under the two conditions and achieve competitive and even better results than the pivot-based translation, while the vanilla model is unstable and completely fails after a small number of iterations on large-bs.

Case Study  We also show a randomly selected example for Ru→Zh from the validation set of the MultiUN dataset in Fig. 5. We can see that at the beginning, the output sentence of ZS+LM is fluent while ZS learns translation faster than ZS+LM. Then, En tokens start to appear in the output sentence of ZS, and its output eventually shifts entirely to En.

Figure 5: Zero-shot translation performance on Ru→Zh from the MultiUN dataset. (↑) An example randomly selected from the validation set is translated by both the vanilla zero-shot NMT and the LM pre-trained model at four checkpoints; translations in an incorrect language (English) are marked in pink. (←) The two learning curves show the averaged zero-shot BLEU scores on the MultiUN validation set, with the corresponding checkpoints marked.

6 Related Works

Zero-shot Neural Machine Translation  Zero-shot NMT has received increasing interest in recent years. Platanios et al.
(2018) introduced the contextual parameter generator, which generated the parameters of the system and performed zero-shot translation. Arivazhagan et al. (2018) conjectured the solution towards the degeneracy in zero-shot NMT was to guide an NMT encoder to learn language agnostic representations. Sestorain et al. (2018) combined dual learning to improve zero-shot NMT. However, unlike our work, none of these prior works performed quantitative investigation of the underlying cause. Zero Resource Translation This work is also closely related to zero-resource translation which is a general task to translate between languages without parallel resources. Possible solutions include pivot-based translation, multilingual or unsupervised NMT. For instance, there have been attempts to train a single-pair model with a pivotlanguage (Cheng et al., 2016; Chen et al., 2017) or a pivot-image (Lee et al., 2017; Chen et al., 2018). Unsupervised Translation Unlike the focus of this work, unsupervised translation usually refers to a zero-resource problem where many monolingual corpora are available. Lample et al. (2018a); Artetxe et al. (2018) proposed to enforce a shared 1266 latent space to improve unsupervised translation quality which was shown not necessary by Lample et al. (2018b) in which a more effective initialization method for related languages was proposed. Neural Machine Translation Pre-training As a standard transfer learning approach, pre-training significantly improves the translation quality of low resource languages by fine-tuning the parameters trained on high-resource languages (Zoph et al., 2016; Gu et al., 2018c; Lample and Conneau, 2019). Our proposed LM pre-training can also be included in the same scope while following a different motivation. 7 Conclusion In this paper, we analyzed the issue of zero-shot translation quantitatively and successfully close the gap of the performance of between zero-shot translation and pivot-based zero-resource translation. We proposed two simple and effective strategies for zero-shot translation. Experiments on the Europarl, IWSLT and MultiUN corpora show that our proposed methods significantly improve the vanilla zero-shot NMT and consistently outperform the pivot-based methods. Acknowledgement This research was supported in part by the Facebook Low Resource Neural Machine Translation Award. This work was also partly supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Electronics (Improving Deep Learning using Latent Structure). KC thanks support by eBay, TenCent, NVIDIA and CIFAR. References Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Roee Aharoni, Melvin Johnson, and Wolfgang Macherey. 2018. The missing ingredient in zero-shot neural machine translation. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In Proceedings of International Conference on Learning Representations (ICLR), Vancouver, Canada. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Yun Chen, Yang Liu, Yong Cheng, and Victor OK Li. 2017. A teacher-student framework for zeroresource neural machine translation. arXiv preprint arXiv:1705.00753. Yun Chen, Yang Liu, and Victor OK Li. 2018. Zeroresource neural machine translation with multiagent communication game. arXiv preprint arXiv:1802.03116. 
Yong Cheng, Yang Liu, Qian Yang, Maosong Sun, and Wei Xu. 2016. Neural machine translation with pivot languages. arXiv preprint arXiv:1611.04928. Kyunghyun Cho, Bart van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–Decoder approaches. In Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016a. Multi-way, multilingual neural machine translation with a shared attention mechanism. In NAACL. Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T Yarman Vural, and Kyunghyun Cho. 2016b. Zero-resource translation with multi-lingual neural machine translation. In EMNLP. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of International Conference on Machine Learning (ICML). Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256. Jiatao Gu, James Bradbury, Caiming Xiong, Victor O. K. Li, and Richard Socher. 2018a. Nonautoregressive neural machine translation. ICLR. Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor OK Li. 2018b. Universal neural machine translation for extremely low resource languages. arXiv preprint arXiv:1802.05368. Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho, and Victor OK Li. 2018c. Meta-learning for lowresource neural machine translation. arXiv preprint arXiv:1808.08437. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. 1267 Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, pages 48–54. Association for Computational Linguistics. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. arXiv preprint arXiv:1901.07291. Guillaume Lample, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In Proceedings of International Conference on Learning Representations (ICLR), Vancouver, Canada. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049. Association for Computational Linguistics. Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2016. Fully character-level neural machine translation without explicit segmentation. arXiv preprint arXiv:1610.03017. Jason Lee, Kyunghyun Cho, Jason Weston, and Douwe Kiela. 2017. Emergent translation in multi-agent communication. arXiv preprint arXiv:1710.06922. Marco Lui and Timothy Baldwin. 2012. langid. py: An off-the-shelf language identification tool. In Proceedings of the ACL 2012 system demonstrations, pages 25–30. Association for Computational Linguistics. Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, and Tom Mitchell. 2018. 
Contextual parameter generation for universal neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 425–435. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh neural machine translation systems for wmt 16. arXiv preprint arXiv:1606.02891. Lierni Sestorain, Massimiliano Ciaramita, Christian Buck, and Thomas Hofmann. 2018. Zeroshot dual machine translation. arXiv preprint arXiv:1805.10338. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. NIPS. Masao Utiyama and Hitoshi Isahara. 2007. A comparison of pivot methods for phrase-based statistical machine translation. In Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 484–491. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS). Hao Zheng, Yong Cheng, and Yang Liu. 2017. Maximum expected likelihood estimation for zeroresource neural machine translation. IJCAI. Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568–1575. Association for Computational Linguistics. A Additional Experiments A.1 Trade-off between decoding speed and translation quality In Table. 3, we empirically tested the decoding speed by using either pivot-based methods or zeroshot NMT. The overhead of switching models in pivot-based translation has been ignored. All the speed are measured as “ms/sentence” and tested in parallel on 8 V100 GPUs using beam-search with a beam size 4. Model BLEU Speed PIV-S (greedy) 31.1 8.3 PIV-M (greedy) 30.6 8.3 PIV-S 31.5 13.3 PIV-M 31.1 13.3 ZS 29.4 6.6 ZS+LM 32.6 6.6 ZS+BTTP 33.1 6.6 ZS+BTZS 33.4 6.6 Table 3: Decoding speed and the translation quality (average BLEU scores) of the zero-shot pairs on Europarl dataset. Vanilla zero-shot NMT is faster but performs worse than pivot-based methods. There exists a trade-off between the decoding speed and the translation quality where we also present a fast pivoting method where we found that using greedy-decoding for the pivot language only affects the translation quality by a small margin. 1268 However, both our proposed approaches significantly improve the zero-shot NMT and outperforms the pivot-based translation with shorter decoding time, making such trade-off meaningless. A.2 Effect of Using Multi-way Data Prior research (Cheng et al., 2016) also reported that the original Europarl dataset contains a large proportion of multi-way translations. To investigate the affects, we followed the same process in (Cheng et al., 2016; Chen et al., 2017) to exclude all multi-way translation sentences, which means there are no overlaps in pairwise language pairs. The statistics of this modified dataset (Europarlc) compared to the original Europarl dataset are shown in Table 4. Although we observed a performance drop by using data without multi-way sentences, the results in Table 5 show that the proposed LM pre-training is not affected by obtaining multi-way data and consistently improves the vanilla zero-shot NMT. We conjecture that the performance drop is mainly because of the size of the dataset. 
Also our methods can easily beat (Chen et al., 2017) with large margins. Dataset parallel pairs size/pair Europarl Es-En, De-En, Fr-En 2M Europarl-c Es-En, De-En, Fr-En .8M Table 4: Europarl denotes multi-way dataset; Europarlc denotes non multi-way dataset. Model Es→Fr De→Fr Yes No Yes No PIV-S 37.95 32.98 32.20 27.94 PIV-M 37.15 35.08 31.46 29.78 ZS 36.69 33.22 29.59 26.91 ZS + LM 40.04 37.22 33.24 30.45 Chen et al. (2017) − 33.86 − 27.03 Table 5: Effects of multi-way data on Europarl. “Yes” means with multi-way translation, and “No” means the opposite.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1269–1281, Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

Syntactically Supervised Transformers for Faster Neural Machine Translation

Nader Akoury, Kalpesh Krishna, Mohit Iyyer
College of Information and Computer Sciences
University of Massachusetts Amherst
{nsa,kalpesh,miyyer}@cs.umass.edu

Abstract

Standard decoders for neural machine translation autoregressively generate a single target token per time step, which slows inference especially for long outputs. While architectural advances such as the Transformer fully parallelize the decoder computations at training time, inference still proceeds sequentially. Recent developments in non- and semi-autoregressive decoding produce multiple tokens per time step independently of the others, which improves inference speed but deteriorates translation quality. In this work, we propose the syntactically supervised Transformer (SynST), which first autoregressively predicts a chunked parse tree before generating all of the target tokens in one shot conditioned on the predicted parse. A series of controlled experiments demonstrates that SynST decodes sentences ∼5× faster than the baseline autoregressive Transformer while achieving higher BLEU scores than most competing methods on En-De and En-Fr datasets.

1 Introduction

Most models for neural machine translation (NMT) rely on autoregressive decoders, which predict each token ti in the target language one by one conditioned on all previously-generated target tokens t1···i−1 and the source sentence s. For downstream applications of NMT that prioritize low latency (e.g., real-time translation), autoregressive decoding proves expensive, as decoding time in state-of-the-art attentional models such as the Transformer (Vaswani et al., 2017) scales quadratically with the number of target tokens.

In order to speed up test-time translation, non-autoregressive decoding methods produce all target tokens at once independently of each other (Gu et al., 2018; Lee et al., 2018), while semi-autoregressive decoding (Wang et al., 2018; Stern et al., 2018) trades off speed for quality by reducing (but not completely eliminating) the number of sequential computations in the decoder (Figure 1).

Figure 1: Comparison of different methods designed to increase decoding speed. The arrow > indicates the beginning of a new decode step conditioned on everything that came previously. The latent Transformer produces a sequence of discrete latent variables, whereas SynST produces a sequence of syntactic constituent identifiers.

We choose the latent Transformer (LT) of Kaiser et al. (2018) as a starting point, which merges both of these approaches by autoregressively generating a short sequence of discrete latent variables before non-autoregressively producing all target tokens conditioned on the generated latent sequence. Kaiser et al. (2018) experiment with increasingly complex ways of learning their discrete latent space, some of which obtain small BLEU improvements over a purely non-autoregressive baseline with similar decoding speedups.
In this work, we propose to syntactically supervise the latent space, which results in a simpler model that produces better and faster translations.1 Our model, the syntactically supervised Transformer (SynST, Section 3), first autoregressively predicts a sequence of target syntactic chunks, and then non-autoregressively 1Source code to reproduce our results is available at https://github.com/dojoteef/synst 1270 generates all of the target tokens conditioned on the predicted chunk sequence. During training, the chunks are derived from the output of an external constituency parser. We propose a simple algorithm on top of these parses that allows us to control the average chunk size, which in turn limits the number of autoregressive decoding steps we have to perform. SynST improves on the published LT results for WMT 2014 En→De in terms of both BLEU (20.7 vs. 19.8) and decoding speed (4.9× speedup vs. 3.9×). While we replicate the setup of Kaiser et al. (2018) to the best of our ability, other work in this area does not adhere to the same set of datasets, base models, or “training tricks”, so a legitimate comparison with published results is difficult. For a more rigorous comparison, we re-implement another related model within our framework, the semiautoregressive transformer (SAT) of Wang et al. (2018), and observe improvements in BLEU and decoding speed on both En↔De and En→Fr language pairs (Section 4). While we build on a rich line of work that integrates syntax into both NMT (Aharoni and Goldberg, 2017; Eriguchi et al., 2017) and other language processing tasks (Strubell et al., 2018; Swayamdipta et al., 2018), we aim to use syntax to speed up decoding, not improve downstream performance (i.e., translation quality). An in-depth analysis (Section 5) reveals that syntax is a powerful abstraction for non-autoregressive translation: for example, removing information about the constituent type of each chunk results in a drop of 15 BLEU on IWSLT En→De. 2 Decoding in Transformers Our work extends the Transformer architecture (Vaswani et al., 2017), which is an instance of the encoder-decoder framework for language generation that uses stacked layers of self-attention to both encode a source sentence and decode the corresponding target sequence. In this section, we briefly review2 the essential components of the Transformer architecture before stepping through the decoding process in both the vanilla autoregressive Transformer and non- and semi-autoregressive extensions of the model. 2We omit several architectural details in our overview, which can be found in full in Vaswani et al. (2017). 2.1 Transformers for NMT The Transformer encoder takes a sequence of source word embeddings s1,··· ,sn as input and passes it through multiple blocks of self-attention and feed-forward layers to finally produce contextualized token representations e1,··· ,en. Unlike recurrent architectures (Hochreiter and Schmidhuber, 1997; Bahdanau et al., 2014), the computation of en does not depend on en−1, which enables full parallelization of the encoder’s computations at both training and inference. To retain information about the order of the input words, the Transformer also includes positional encodings, which are added to the source word embeddings. 
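For reference, the sinusoidal positional encodings of Vaswani et al. (2017) mentioned above can be computed as in the sketch below (assuming an even d_model; this is illustrative code, not the authors' implementation).

```python
import torch

def add_positional_encoding(embeddings):
    """Add the sinusoidal positional encodings of Vaswani et al. (2017).

    embeddings: (seq_len, d_model) tensor of token embeddings, d_model even.
    """
    seq_len, d_model = embeddings.shape
    position = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)        # (L, 1)
    div_term = torch.exp(
        torch.arange(0, d_model, 2, dtype=torch.float32)
        * (-torch.log(torch.tensor(10000.0)) / d_model)
    )
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)   # even dimensions
    pe[:, 1::2] = torch.cos(position * div_term)   # odd dimensions
    return embeddings + pe
```

Because the encodings are a fixed function of position, they can be precomputed once and reused for every batch.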
The decoder of the Transformer operates very similarly to the encoder during training: it takes a shifted sequence of target word embeddings t1,··· ,tm as input and produces contextualized token representations d1,··· ,dm, from which the target tokens are predicted by a softmax layer. Unlike the encoder, each block of the decoder also performs source attention over the representations e1...n produced by the encoder. Another difference during training time is target-side masking: at position i, the decoder's self-attention should not be able to look at the representations of later positions i + 1, . . . , m, as otherwise predicting the next token becomes trivial. To impose this constraint, the self-attention can be masked by using a lower triangular matrix with ones below and along the diagonal.

2.2 Autoregressive decoding

While at training time the decoder's computations can be parallelized using masked self-attention on the ground-truth target word embeddings, inference still proceeds token-by-token. Formally, the vanilla Transformer factorizes the probability of target tokens t1,··· ,tm conditioned on the source sentence s into a product of token-level conditional probabilities using the chain rule,

    p(t_1,\dots,t_m \mid s) = \prod_{i=1}^{m} p(t_i \mid t_1,\dots,t_{i-1}, s).

During inference, computing \arg\max_t p(t \mid s) is intractable, which necessitates the use of approximate algorithms such as beam search. Decoding requires a separate decode step to generate each target token ti; as each decode step involves a full pass through every block of the decoder, autoregressive decoding becomes time-consuming, especially for longer target sequences in which there are more tokens to attend to at every block.

Figure 2: A high-level overview of the SynST architecture. During training, the parse decoder learns to autoregressively predict all chunk identifiers in parallel (time steps t1,2,3), while the token decoder conditions on the "ground truth" chunk identifiers to predict the target tokens in one shot (time step t4). During inference (shown here), the token decoder conditions on the autoregressively predicted chunk identifiers. The encoder and token decoder contain N ≥ 1 layers, while the parse decoder only requires M = 1 layer (see Table 4).

2.3 Generating multiple tokens per time step

As decoding time is a function of the number of decoding time steps (and consequently the number of passes through the decoder), faster inference can be obtained using methods that reduce the number of time steps. In autoregressive decoding, the number of time steps is equal to the target sentence length m; the most extreme alternative is (naturally) non-autoregressive decoding, which requires just a single time step by factorizing the target sequence probability as

    p(t_1,\dots,t_m \mid s) = \prod_{i=1}^{m} p(t_i \mid s).

Here, all target tokens are produced independently of each other.
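The contrast between the two factorizations can be seen in the following sketch of greedy decoding loops; `decoder`, `memory`, and the special token ids are assumed stand-ins for a trained model rather than any particular implementation.

```python
import torch

@torch.no_grad()
def greedy_autoregressive(decoder, memory, bos, eos, max_len):
    """m decoder passes: each new token conditions on all previous ones."""
    tokens = [bos]
    for _ in range(max_len):
        logits = decoder(torch.tensor([tokens]), memory)   # full pass per step
        next_token = logits[0, -1].argmax().item()
        tokens.append(next_token)
        if next_token == eos:
            break
    return tokens[1:]

@torch.no_grad()
def one_shot_non_autoregressive(decoder, memory, placeholder, length):
    """A single decoder pass: every position is predicted independently."""
    inputs = torch.full((1, length), placeholder)
    logits = decoder(inputs, memory)                        # one pass total
    return logits.argmax(dim=-1)[0].tolist()
```

The single-pass variant is what buys the speedup, at the cost of dropping all dependencies between output tokens.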
While this formulation does indeed provide significant decoding speedups, translation quality suffers after dropping the dependencies between target tokens, unless additional expensive reranking steps (Gu et al., 2018, NAT) or iterative refinement with multiple decoders (Lee et al., 2018) are used.

As fully non-autoregressive decoding results in poor translation quality, another class of methods produces k tokens at a single time step, where 1 < k < m. The semi-autoregressive Transformer (SAT) of Wang et al. (2018) produces a fixed k tokens per time step, thus modifying the target sequence probability to

    p(t_1,\dots,t_m \mid s) = \prod_{t=1}^{|G|} p(G_t \mid G_1,\dots,G_{t-1}, s),

where each of G_1,\dots,G_{\lfloor (m-1)/k \rfloor + 1} is a group of contiguous non-overlapping target tokens of the form t_i,\dots,t_{i+k}. In conjunction with training techniques like knowledge distillation (Kim and Rush, 2016) and initialization with an autoregressive model, SATs maintain better translation quality than non-autoregressive approaches with competitive speedups. Stern et al. (2018) follow a similar approach but dynamically select a different k at each step, which results in further quality improvements with a corresponding decrease in speed.

2.4 Latent Transformer

While current semi-autoregressive methods achieve both better quality and faster speedups than their non-autoregressive counterparts, largely due to the number of tricks required to train the latter, the theoretical speedup for non-autoregressive models is of course larger. The latent Transformer (Kaiser et al., 2018, LT) is similar to both of these lines of work: its decoder first autoregressively generates a sequence of discrete latent variables l1,··· ,lj and then non-autoregressively produces the entire target sentence t1,··· ,tm conditioned on the latent sequence. Two parameters control the magnitude of the speedup in this framework: the length of the latent sequence (j), and the size of the discrete latent space (K).

The LT is significantly more difficult to train than any of the previously-discussed models, as it requires passing the target sequence through what Kaiser et al. (2018) term a discretization bottleneck that must also maintain differentiability through the decoder. While LT outperforms the NAT variant of non-autoregressive decoding in terms of BLEU, it takes longer to decode. In the next section, we describe how we use syntax to address the following three weaknesses of LT:
1. generating the same number of latent variables j regardless of the length of the source sentence, which hampers output quality;
2. relying on a large value of K (the authors report that in the base configuration as few as ∼3000 latents are used out of 2^16 available), which hurts translation speed;
3. the complexity of implementation and optimization of the discretization bottleneck, which negatively impacts both quality and speed.

3 Syntactically Supervised Transformers

Our key insight is that we can use syntactic information as a proxy to the learned discrete latent space of the LT. Specifically, instead of producing a sequence of latent discrete variables, our model produces a sequence of phrasal chunks derived from a constituency parser. During training, the chunk sequence prediction task is supervised, which removes the need for a complicated discretization bottleneck and a fixed sequence length j. Additionally, our chunk vocabulary is much smaller than that of the LT, which improves decoding speed.
Our model, the syntactically supervised Transformer (SynST), follows the two-stage decoding setup of the LT. First, an autoregressive decoder generates the phrasal chunk sequence, and then all of the target tokens are generated at once, conditioned on the chunks (Figure 2). The rest of this section fully specifies each of these two stages.

3.1 Autoregressive chunk decoding

Intuitively, our model uses syntax as a scaffold for the generated target sentence. During training, we acquire supervision for the syntactic prediction task through an external parser in the target language. While we could simply force the model to predict the entire linearized parse minus the terminals,3 this approach would dramatically increase the number of autoregressive steps, which we want to keep at a minimum to prioritize speed. To balance syntactic expressivity with the number of decoding time steps, we apply a simple chunking algorithm to the constituency parse.

3 This approach is used for paraphrase generation by Iyyer et al. (2018), who were not focused on decoding speed.

Extracting chunk sequences: Similar to the SAT method, we first choose a maximum chunk size k. Then, for every target sentence in the training data, we perform an in-order traversal of its constituency parse tree. At each visited node, if the number of leaves spanned by that node is less than or equal to k, we append a descriptive chunk identifier to the parse sequence before moving onto its sibling; otherwise, we proceed to the left child and try again. This process is shown for two different values of k on the same sentence in Figure 3. Each unique chunk identifier, which is formed by the concatenation of the constituent type and subtree size (e.g., NP3), is considered as an element of our first decoder's vocabulary; thus, the maximum size of this vocabulary is |P| × k, where P is the set of all unique constituent types.4 Both parts of the chunk identifier (the constituent type and its size) are crucial to the performance of SynST, as demonstrated by the ablations in Section 5.

Figure 3: Example of our parse chunk algorithm with max span sizes k = 2, 3. At each visited node during an in-order traversal of the parse, if the subtree size is less than or equal to k, we append a corresponding chunk identifier to our sequence. (For "the sleepy cat closed its eyes": k = 3 yields NP3 VP3; k = 2 yields DT1 JJ1 NN1 VBD1 NP2.)
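The chunk-extraction procedure described above can be sketched as follows, here using an NLTK-style tree purely for illustration; the greedy left-to-right descent matches the behaviour described in the text and in Figure 3, but the helper names are ours, not the authors'.

```python
from nltk import Tree

def chunk_sequence(parse, k):
    """Greedy chunking of a constituency parse (Sec. 3.1 sketch).

    Walk the tree left to right; whenever the current node spans at most k
    leaves, emit its "<label><span size>" identifier and skip its subtree,
    otherwise descend into its children.
    """
    chunks = []

    def visit(node):
        if isinstance(node, str):            # bare terminal string: nothing to emit
            return
        span = len(node.leaves())
        if span <= k:
            chunks.append(f"{node.label()}{span}")
        else:
            for child in node:
                visit(child)

    visit(parse)
    return chunks

# Example matching Figure 3:
# tree = Tree.fromstring("(S (NP (DT the) (JJ sleepy) (NN cat)) "
#                        "(VP (VBD closed) (NP (PRP$ its) (NNS eyes))))")
# chunk_sequence(tree, 3) -> ["NP3", "VP3"]
# chunk_sequence(tree, 2) -> ["DT1", "JJ1", "NN1", "VBD1", "NP2"]
```

Larger values of k therefore yield shorter chunk sequences and fewer autoregressive steps.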
Predicting chunk sequences: Because we are fully supervising the chunk sequence prediction, both the encoder and parse decoder are architecturally identical to the encoder and decoder of the vanilla Transformer, respectively. The parse decoder differs in its target vocabulary, which is made up of chunk identifiers instead of word types, and in the number of layers (we use 1 layer instead of 6, as we observe diminishing returns from bigger parse decoders, as shown in Section 5). Formally, the parse decoder autoregressively predicts a sequence of chunk identifiers c1,··· ,cp conditioned on the source sentence s5 by modeling

    p(c_1,\dots,c_p \mid s) = \prod_{i=1}^{p} p(c_i \mid c_1,\dots,c_{i-1}, s).

Unlike LT, the length p of the chunk sequence changes dynamically based on the length of the target sentence, which is reminiscent of the token decoding process in the SAT.

3.2 Non-autoregressive token decoding

In the second phase of decoding, we apply a single non-autoregressive step to produce the tokens of the target sentence by factorizing the target sequence probability as

    p(t_1,\dots,t_m \mid s) = \prod_{i=1}^{m} p(t_i \mid c_1,\dots,c_p, s).

Here, all target tokens are produced independently of each other, but in contrast to the previously-described non-autoregressive models, we additionally condition each prediction on the entire chunk sequence. To implement this decoding step, we feed a chunk sequence as input to a second Transformer decoder, whose parameters are separate from those of the parse decoder. During training, we use the ground-truth chunk sequence as input, while at inference we use the predicted chunks.

4 In practice, this vocabulary is significantly smaller than the discrete latent space of the LT for reasonable values of k.
5 In preliminary experiments, we also tried conditioning this decoder on the source parse, but we did not notice significant differences in translation quality.

Implementation details: To ensure that the number of input and output tokens in the second decoder are equal, which is a requirement of the Transformer decoder, we add placeholder <MASK> tokens to the chunk sequence, using the size component of each chunk identifier to determine where to place these tokens. For example, if the first decoder produces the chunk sequence NP2 PP3, our second decoder's input becomes NP2 <MASK> <MASK> PP3 <MASK> <MASK> <MASK>; this formulation also allows us to better leverage the Transformer's positional encodings. Then, we apply unmasked self-attention over this input sequence and predict target language tokens at each position associated with a <MASK> token.

4 Experiments

We evaluate the translation quality (in terms of BLEU) and the decoding speedup (average time to decode a sentence) of SynST compared to competing approaches. In a controlled series of experiments on four different datasets (En↔De and En→Fr language pairs),6 we find that SynST achieves a strong balance between quality and speed, consistently outperforming the semi-autoregressive SAT on all datasets and the similar LT on the only translation dataset for which Kaiser et al. (2018) report results. In this section, we first describe our experimental setup and its differences to those of previous work before providing a summary of the key results.

6 We explored translating to other languages previously evaluated in the non- and semi-autoregressive decoding literature, but could not find publicly-available, reliable constituency parsers for them.

4.1 Controlled experiments

Existing papers in non- and semi-autoregressive approaches do not adhere to a standard set of datasets, base model architectures, training tricks, or even evaluation scripts. This unfortunate disparity in evaluation setups means that numbers between different papers are not comparable, making it difficult for practitioners to decide which method to choose. In an effort to offer a more meaningful comparison, we strive to keep our experimental conditions as close to those of Kaiser et al. (2018) as possible, as the LT is the most similar existing model to ours. In doing so, we made the following decisions:
• Our base model is the base vanilla Transformer (Vaswani et al., 2017) without any ar-
1274 Model WMT En-De WMT De-En IWSLT En-De WMT En-Fr BLEU Speedup BLEU Speedup BLEU Speedup BLEU Speedup Baseline (b = 1) 25.82 1.15× 29.83 1.14× 28.66 1.16× 39.41 1.18× Baseline (b = 4) 26.87 1.00× 30.73 1.00× 30.00 1.00× 40.22 1.00× SAT (k = 2) 22.81 2.05× 26.78 2.04× 25.48 2.03× 36.62 2.14× SAT (k = 4) 16.44 3.61× 21.27 3.58× 20.25 3.45× 28.07 3.34× SAT (k = 6) 12.55 4.86× 15.23 4.27× 14.02 4.39× 24.63 4.77× LT* 19.8 3.89× SynST(k = 6) 20.74 4.86× 25.50 5.06× 23.82 3.78× 33.47 5.32× Table 1: Controlled experiments comparing SynST to a baseline Transformer, SAT, and LT on four different datasets (two language pairs) demonstrate speed and BLEU improvements. Wall-clock speedup is measured on a single Nvidia TitanX Pascal by computing the average time taken to decode a single sentence in the dev/test set, averaged over five runs. When beam width b is not specified, we perform greedy decoding (i.e., b = 1). Note that the LT results are reported by Kaiser et al. (2018) and not from our own implementation;9 as such, they are not directly comparable to the other results. chitectural upgrades.7 • We use all of the hyperparameter values from the original Transformer paper and do not attempt to tune them further, except for: (1) the number of layers in the parse decoder, (2) the decoders do not use label smoothing. • We do not use sequence-level knowledge distillation, which augments the training data with translations produced by an external autoregressive model. The choice of model used for distillation plays a part in the final BLEU score, so we remove this variable. • We report all our BLEU numbers using sacreBLEU (Post, 2018) to ensure comparability with future work.8 • We report wall-clock speedups by measuring the average time to decode one sentence (batch size of one) in the dev/test set. As the code for LT is not readily available9, we also reimplement the SAT model using our setup, as it is the most similar model outside of LT to our own.10 For SynST, we set the maximum chunk size 7As the popular Tensor2Tensor implementation is constantly being tweaked, we instead re-implement the Transformer as originally published and verify that its results closely match the published ones. Our implementation achieves a BLEU of 27.69 on WMT’14 En-De, when using multi-bleu.perl from Moses SMT. 8SacreBLEU signature: BLEU+case.mixed+lang.LANG +numrefs.1+smooth.exp+test.TEST+tok.intl+version.1.2.11, with LANG ∈{en-de, de-en, en-fr} and TEST ∈{wmt14/full, iwslt2017/tst2013} 9We attempted to use the publicly available code in Tensor2Tensor, but were unable to successfully train a model. 10The published SAT results use knowledge distillation and k = 6 and compare this model to the SAT trained with k = 2, 4, 6. 4.2 Datasets We experiment with English-German and EnglishFrench datasets, relying on constituency parsers in all three languages. We use the Stanford CoreNLP (Manning et al., 2014) shift-reduce parsers for English, German, and French. For English-German, we evaluate on WMT 2014 En↔De as well as IWSLT 2016 En→De, while for English-French we train on the Europarl / Common Crawl subset of the full WMT 2014 En→Fr data and evaluate over the full dev/test sets. WMT 2014 En↔De consists of around 4.5 million sentence pairs encoded using byte pair encoding (Sennrich et al., 2016) with a shared source-target vocabulary of roughly 37000 tokens. 
We use the same preprocessed dataset used in the original Transformer paper and also by many subsequent papers that have investigated improving decoding speed, evaluating on the newstest2013 dataset for validation and the newstest2014 dataset for testing. For the IWSLT dataset we use tst2013 for validation and utilize the same hyperparameters as Lee et al. (2018). 4.3 Results Table 1 contains the results on all four datasets. SynST achieves speedups of ∼4 −5× that of the vanilla Transformer, which is larger than nearly all different hyperparameters than the vanilla Transformer, most notably a tenfold decrease in training steps due to initializing from a pre-trained Transformer. 1275 Chunk types SynST predictions with | separating syntax chunks Words repeated in two separate syntax chunks (blue, red) NP1, NP3 NP3, PP4 NP2, PP3 But | it | is | enthusiasm | in | a great enthusiasm ... Enrique | Pena | Nieto | is | facing | a difficult start | on a difficult start Do | you | not | turn | your voters | on your voters Output type Output for a single example SynST reorders syntax chunks, which is fixed with gold parses (GP) as input ground truth SynST predicted parse SynST + GP Canada | was | the first country | to | make | photograph warnings | mandatory in 2001 Canada | was | the first country | in 2001 | to | propose | photographic warnings NP1 VBD1 NP3 PP2 TO1 VB1 NP4 Canada | was | the first country | to | make | photographic warnings | available in 2001 True chunk SynST predictions with @@ as subword divisions Wrong subword completion within a syntax chunk ignores them beforehand examines I | simply | ign@@ it them Most ST@@ I | can | be | cur@@ ed | be@@ foreh@@ ly Beg@@ inning | of | the course | which | exam@@ ates | the ... Table 2: Common error made by SynST due to its syntactically informed semi-autoregressive decoding. Different syntax chunks have been separated by | symbols in all the decoded outputs. of the SAT configurations. Quality-wise, SynST again significantly outperforms the SAT configurations at comparable speedups on all datasets. On WMT En-De, SynST improves by 1 BLEU over LT (20.74 vs LT’s 19.8 without reranking). Comparisons to other published work: As mentioned earlier, we adopt a very strict set of experimental conditions to evaluate our work against LT and SAT. For completeness, we also offer an unscientific comparison to other numbers in Table A1. 5 Analysis In this section, we perform several analysis and ablation experiments on the IWSLT En-De dev set to shed more light on how SynST works. Specifically, we explore common classes of translation errors, important factors behind SynST’s speedup, and the performance of SynST’s parse decoder. 2 4 6 8 10 k 1.5 2.0 2.5 Average Chunk Size Chunk Size given k iwslt_en_de_parsed wmt_en_de_parsed wmt_de_en_parsed wmt_en_fr_parsed Figure 4: The average size of a chunk given a particular value of the max chunk size k. 5.1 Analyzing SynST’s translation quality What types of translation errors does SynST make? Through a qualitative inspection of SynST’s output translations, we identify three types of errors that SynST makes more frequently than the vanilla Transformer: subword repetition, phrasal reordering, and inaccurate subword completions. Table 2 contains examples of each error type. Do we need to include the constituent type in the chunk identifier? SynST’s chunk identifiers contain both the constituent type as well as chunk size. 
Is the syntactic information actually useful during decoding, or is most of the benefit from the chunk size? To answer this question, we train a variant of SynST without the constituent identifiers, so instead of predicting NP3 VP2 PP4, for example, the parse decoder would predict 3 2 4. This model substantially underperforms, achieving a BLEU of 8.19 compared to 23.82 for SynST, which indicates that the syntactic information is of considerable value. How much does BLEU improve when we provide the ground-truth chunk sequence? To get an upper bound on how much we can gain by improving SynST’s parse decoder, we replace the input to the second decoder with the ground-truth chunk sequence instead of the one generated by the parse decoder. The BLEU increases from 23.8 to 41.5 with this single change, indicating that future work on SynST’s parse decoder could prove very fruitful. 1276 Predicted parse vs. Gold parse (separate) Predicted parse vs. Gold parse (joint) Parsed prediction vs. Gold parse Parsed prediction vs. Predicted parse F1 65.48 69.64 79.16 89.90 Exact match 4.23% 5.24% 5.94% 43.10% Table 3: F1 and exact match comparisons of predicted chunk sequences (from the parse decoder), ground-truth chunk sequences (from an external parser in the target language), and chunk sequences obtained after parsing the translation produced by the token decoder. First two columns show the improvement obtained by jointly training the two decoders. The third column shows that when the token decoder deviates from the predicted chunk sequence, it usually results in a translation that is closer to the ground-truth target syntax, while the fourth column shows that the token decoder closely follows the predicted chunk sequence. 5.2 Analyzing SynST’s speedup What is the impact of average chunk size on our measured speedup? Figure 4 shows that the IWSLT dataset, for which we report the lowest SynST speedup, has a significantly lower average chunk size than that of the other datasets at many different values of k.11 We observe that our empirical speedup directly correlates with the average chunk size: ranking the datasets by empirical speedups in Table 1 results in the same ordering as Figure 4’s ranking by average chunk size. How does the number of layers in SynST’s parse decoder affect the BLEU/speedup tradeoff? All SynST experiments in Table 1 use a single layer for the parse decoder. Table 4 shows that increasing the number of layers from 1 to 5 results in a BLEU increase of only 0.5, while the speedup drops from 3.8× to 1.4×. Our experiments indicate that (1) a single layer parse decoder is reasonably sufficient to model the chunked sequence and (2) despite its small output vocabulary, the parse decoder is the bottleneck of SynST in terms of decoding speed. 5.3 Analyzing SynST’s parse decoder How well does the predicted chunk sequence match the ground truth? We evaluate the generated chunk sequences by the parse decoder to explore how well it can recover the ground-truth chunk sequence (where the “ground truth” is provided by the external parser). Concretely, we compute the chunk-level F1 between the predicted chunk sequence and the ground-truth. We evaluate two configurations of the parse decoder, one in which it is trained separately from the token decoder (first column of Table 3), and the other where both decoders are trained jointly (second column of Ta11IWSLT is composed of TED talk subtitles. A small average chunk size is likely due to including many short utterances. 
# Layers Max Chunk Size Speedup BLEU 1 k = 6 3.8× 23.82 2 k = 6 2.8× 23.98 3 k = 6 2.2× 24.54 4 k = 6 1.8× 24.04 5 k = 6 1.4× 24.34 1 k ∈{1 . . . 6} 3.1× 25.31 Table 4: Increasing the number of layers in SynST’s parse decoder significantly lowers the speedup while marginally impacting BLEU. Randomly sampling k from {1 . . . 6} during training boosts BLEU significantly with minimal impact on speedup. ble 3). We observe that joint training boosts the chunk F1 from 65.4 to 69.6, although, in both cases the F1 scores are relatively low, which matches our intuition as most source sentences can be translated into multiple target syntactic forms. How much does the token decoder rely on the predicted chunk sequence? If SynST’s token decoder produces the translation “the man went to the store” from the parse decoder’s prediction of PP3 NP3, it has clearly ignored the predicted chunk sequence. To measure how often the token decoder follows the predicted chunk sequence, we parse the generated translation and compute the F1 between the resulting chunk sequence and the parse decoder’s prediction (fourth column of Table 3). Strong results of 89.9 F1 and 43.1% exact match indicate that the token decoder is heavily reliant on the generated chunk sequences. When the token decoder deviates from the predicted chunk sequence, does it do a better job matching the ground-truth target syntax? Our next experiment investigates why the token decoder sometimes ignores the predicted chunk sequence. One 1277 hypothesis is that it does so to correct mistakes made by the parse decoder. To evaluate this hypothesis, we parse the predicted translation (as we did in the previous experiment) and then compute the chunk-level F1 between the resulting chunk sequence and the ground-truth chunk sequence. The resulting F1 is indeed almost 10 points higher (third column of Table 3), indicating that the token decoder does have the ability to correct mistakes. What if we vary the max chunk size k during training? Given a fixed k, our chunking algorithm (see Figure 3) produces a deterministic chunking, allowing better control of SynST’s speedup, even if that sequence may not be optimal for the token decoder. During training we investigate using k′ = min(k, √ T), where T is the target sentence length (to ensure short inputs do not collapse into a single chunk) and randomly sampling k ∈{1 . . . 6}. The final row of Table 4 shows that exposing the parse decoder to multiple possible chunkings of the same sentence during training allows it to choose a sequence of chunks that has a higher likelihood at test time, improving BLEU by 1.5 while decreasing the speedup from 3.8× to 3.1×; this is an exciting result for future work (see Table A3 for additional analysis). 6 Related Work Our work builds on the existing body of literature in both fast decoding methods for neural generation models as well as syntax-based MT; we review each area below. 6.1 Fast neural decoding While all of the prior work described in Section 2 is relatively recent, non-autoregressive methods for decoding in NMT have been around for longer, although none relies on syntax like SynST. Schwenk (2012) translate short phrases non-autoregressively, while Kaiser and Bengio (2016) implement a nonautoregressive neural GPU architecture and Libovick and Helcl (2018) explore a CTC approach. Guo et al. (2019) use phrase tables and word-level adversarial methods to improve upon the NAT model of Gu et al. (2018), while Wang et al. 
(2019) regularize NAT by introducing similarity and backtranslation terms to the training objective. 6.2 Syntax-based translation There is a rich history of integrating syntax into machine translation systems. Wu (1997) pioneered this direction by proposing an inverse transduction grammar for building word aligners. Yamada and Knight (2001) convert an externally-derived source parse tree to a target sentence, the reverse of what we do with SynST’s parse decoder; later, other variations such as string-to-tree and tree-to-tree translation models followed (Galley et al., 2006; Cowan et al., 2006). The Hiero system of Chiang (2005) employs a learned synchronous context free grammar within phrase-based translation, which follow-up work augmented with syntactic supervision (Zollmann and Venugopal, 2006; Marton and Resnik, 2008; Chiang et al., 2008). Syntax took a back seat with the advent of neural MT, as early sequence to sequence models (Sutskever et al., 2014; Luong et al., 2015) focused on architectures and optimization. Sennrich and Haddow (2016) demonstrate that augmenting word embeddings with dependency relations helps NMT, while Shi et al. (2016) show that NMT systems do not automatically learn subtle syntactic properties. Stahlberg et al. (2016) incorporate Hiero’s translation grammar into NMT systems with improvements; similar follow-up results (Aharoni and Goldberg, 2017; Eriguchi et al., 2017) directly motivated this work. 7 Conclusions & Future Work We propose SynST, a variant of the Transformer architecture that achieves decoding speedups by autoregressively generating a constituency chunk sequence before non-autoregressively producing all tokens in the target sentence. Controlled experiments show that SynST outperforms competing non- and semi-autoregressive approaches in terms of both BLEU and wall-clock speedup on En-De and En-Fr language pairs. While our method is currently restricted to languages that have reliable constituency parsers, an exciting future direction is to explore unsupervised tree induction methods for low-resource target languages (Drozdov et al., 2019). Finally, we hope that future work in this area will follow our lead in using carefully-controlled experiments to enable meaningful comparisons. Acknowledgements We thank the anonymous reviewers for their insightful comments. We also thank Justin Payan and the rest of the UMass NLP group for helpful comments on earlier drafts. Finally, we thank Weiqiu You for additional experimentation efforts. 1278 References Roee Aharoni and Yoav Goldberg. 2017. Towards string-to-tree neural machine translation. In Proceedings of the Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the Association for Computational Linguistics, pages 263–270. Association for Computational Linguistics. David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and structural translation features. In Proceedings of Empirical Methods in Natural Language Processing, pages 224–233. Association for Computational Linguistics. Brooke Cowan, Ivona Kuˇcerov´a, and Michael Collins. 2006. A discriminative model for tree-to-tree translation. 
In Proceedings of Empirical Methods in Natural Language Processing, pages 232–241. Association for Computational Linguistics. Andrew Drozdov, Patrick Verga, Mohit Yadav, Mohit Iyyer, and Andrew McCallum. 2019. Unsupervised latent tree induction with deep inside-outside recursive autoencoders. In Conference of the North American Chapter of the Association for Computational Linguistics. Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to parse and translate improves neural machine translation. In Proceedings of the Association for Computational Linguistics. Association for Computational Linguistics (ACL). Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of the Association for Computational Linguistics, pages 961–968. Association for Computational Linguistics. Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. 2018. Non-autoregressive neural machine translation. In Proceedings of International Conference on Learning Representations. Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, and Tie-Yan Liu. 2019. Non-Autoregressive Neural Machine Translation with Enhanced Decoder Input. In Association for the Advancement of Artificial Intelligence. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Conference of the North American Chapter of the Association for Computational Linguistics. Łukasz Kaiser and Samy Bengio. 2016. Can active memory replace attention? In Proceedings of Advances in Neural Information Processing Systems, pages 3781–3789. Lukasz Kaiser, Samy Bengio, Aurko Roy, Ashish Vaswani, Niki Parmar, Jakob Uszkoreit, and Noam Shazeer. 2018. Fast decoding in sequence models using discrete latent variables. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2390–2399, Stockholmsm¨assan, Stockholm Sweden. PMLR. Yoon Kim and Alexander M Rush. 2016. Sequencelevel knowledge distillation. In Proceedings of Empirical Methods in Natural Language Processing, pages 1317–1327. Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of Empirical Methods in Natural Language Processing, pages 1173–1182. Jindich Libovick and Jindich Helcl. 2018. End-toEnd Non-Autoregressive Neural Machine Translation with Connectionist Temporal Classification. In Proceedings of Empirical Methods in Natural Language Processing. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of Empirical Methods in Natural Language Processing, pages 1412–1421. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. Yuval Marton and Philip Resnik. 2008. Soft syntactic constraints for hierarchical phrased-based translation. Proceedings of the Association for Computational Linguistics, pages 1003–1011. Matt Post. 2018. A call for clarity in reporting BLEU scores. 
In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191. Association for Computational Linguistics. Holger Schwenk. 2012. Continuous space translation models for phrase-based statistical machine translation. Proceedings of International Conference on Computational Linguistics, pages 1071–1080. 1279 Rico Sennrich and Barry Haddow. 2016. Linguistic input features improve neural machine translation. In Proceedings of the First Conference on Machine Translation: Volume 1, Research Papers, volume 1, pages 83–91. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1715–1725. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural mt learn source syntax? In Proceedings of Empirical Methods in Natural Language Processing, pages 1526–1534. F Stahlberg, E Hasler, A Waite, and B Byrne. 2016. Syntactically guided neural machine translation. In Proceedings of the Association for Computational Linguistics, volume 2, pages 299–305. Association for Computational Linguistics. Mitchell Stern, Noam Shazeer, and Jakob Uszkoreit. 2018. Blockwise parallel decoding for deep autoregressive models. In Advances in Neural Information Processing Systems, pages 10106–10115. Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-Informed Self-Attention for Semantic Role Labeling. In Proceedings of Empirical Methods in Natural Language Processing, Brussels, Belgium. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of Advances in Neural Information Processing Systems, pages 3104–3112. Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke Zettlemoyer, Chris Dyer, and Noah A Smith. 2018. Syntactic scaffolds for semantic structures. In Proceedings of Empirical Methods in Natural Language Processing, pages 3772–3782. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Chunqi Wang, Ji Zhang, and Haiqing Chen. 2018. Semi-autoregressive neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 479–488. Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. 2019. Non-Autoregressive Machine Translation with Auxiliary Regularization. In Association for the Advancement of Artificial Intelligence. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational linguistics, 23(3):377–403. Kenji Yamada and Kevin Knight. 2001. A syntaxbased statistical translation model. In Proceedings of the Association for Computational Linguistics. Andreas Zollmann and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. In Proceedings of the Workshop on Statistical Machine Translation, pages 138–141. Association for Computational Linguistics. 1280 Appendix A Unscientific Comparison We include a reference to previously published work in comparison to our approach. Note, that many of these papers have multiple confounding factors that make direct comparison between approaches very difficult. 
Model WMT En-De BLEU Speedup LT rescoring top-100 22.5 NAT rescoring top-100 21.54 BPT (k = 6) 28.11 3.10× IRT (adaptive) 21.54 2.39× SAT (k = 6) 23.93 5.58× SynST(k = 6) 20.74 4.86× Table A1: Unscientific comparison against previously published works. The numbers of each model are taken from their respective papers. These previous results often have uncomparable hyperparameters, compute their BLEU with multi-bleu.perl, and/or require additional steps such as knowledge distillation and re-ranking to achieve their reported numbers. Latent Transformer (LT) (Kaiser et al., 2018), Nonautoregressive Transformer (NAT) (Gu et al., 2018), Blockwise parallel Transformer (BPT) (Stern et al., 2018), Iterative refinement Transformer (IRT) (Lee et al., 2018), Semi-autoregressive Transformer (SAT) (Wang et al., 2018). B The impact of beam search In order to more fully understand the interplay of the representations output from the autoregressive parse decoder on the BLEU/speedup tradeoff we examine the impact of beam search for the parse decoder. From Table A2 we see that beam search does not consitently improve the final translation quality in terms of BLEU (it manages to decrease BLEU on IWSLT), while providing a small reduction in overall speedup for SynST. C SAT replication results As part of our work, we additionally replicated the results of (Wang et al., 2018). We do so without any of the additional training stabilization techniques they use, such as knowledge distillation or initializing from a pre-trained Transformer. Without the use of these techniques, we notice that the approach sometimes catastrophically fails to converge to a meaningful representation, leading to sub-optimal translation performance, despite achieving adequate perplexity. In order to report accurate translation performance for SAT, we needed to re-train the model for k = 4 when it produced BLEU scores in the single digits. D Parse performance when varying max chunk size k In Section 5.3 (see the final row of Table 3) we consider the effect of randomly sampling the max chunk size k during training. This provides a considerable boost to BLEU with a minimal impact to speedup. In Table A3 we highlight the impact to the parse decoder’s ability to predict the groundtruth chunk sequences and how faithfully it follows the predicted sequence. 1281 Model Beam WMT En-De WMT De-En IWSLT En-De WMT En-Fr Width BLEU Speedup BLEU Speedup BLEU Speedup BLEU Speedup Transformer 1 25.82 1.15× 29.83 1.14× 28.66 1.16× 39.41 1.18× Transformer 4 26.87 1.00× 30.73 1.00× 30.00 1.00× 40.22 1.00× SAT (k = 2) 1 22.81 2.05× 26.78 2.04× 25.48 2.03× 36.62 2.14× SAT (k = 2) 4 23.86 1.80× 27.27 1.82× 26.25 1.82× 37.07 1.89× SAT (k = 4) 1 16.44 3.61× 21.27 3.58× 20.25 3.45× 28.07 3.34× SAT (k = 4) 4 18.95 3.25× 23.20 3.19× 20.75 2.97× 32.62 3.08× SAT (k = 6) 1 12.55 4.86× 15.23 4.27× 14.02 4.39× 24.63 4.77× SAT (k = 6) 4 14.99 4.15× 19.51 3.89× 15.51 3.78× 28.16 4.19× LT* 19.8 3.89× SynST(k = 6) 1 20.74 4.86× 25.50 5.06× 23.82 3.78× 33.47 5.32× SynST(k = 6) 4 21.61 3.89× 25.77 4.07× 23.31 3.11× 34.10 4.47× Table A2: Controlled experiments comparing SynST to LT and SAT on four different datasets (two language pairs) demonstrate speed and BLEU improvements while varying beam size. Wall-clock speedup is measured on a single Nvidia TitanX Pascal by computing the average time taken to decode a single sentence in the dev/test set, averaged over five runs. Note that the LT results are reported by Kaiser et al. 
(2018) and not from our own implementation; as such, they are not directly comparable to the other results. Max Chunk Size Predicted parse vs. Gold parse Parsed prediction vs. Gold parse Parsed prediction vs. Predicted parse F1 k = 6 69.64 79.16 89.90 Exact match 5.24% 5.94% 43.10% F1 k ∈{1 . . . 6} 75.35 79.78 95.28 Exact match 4.83% 7.55% 50.15% Table A3: F1 and exact match comparisons of predicted chunk sequences (from the parse decoder), ground-truth chunk sequences (from an external parser in the target language), and chunk sequences obtained after parsing the translation produced by the token decoder. The first column shows how well the parse decoder is able to predict the ground-truth chunk sequence when trained jointly with the token decoder. The second column shows that when the token decoder deviates from the predicted chunk sequence, it usually results in a translation that is closer to the ground-truth target syntax, while the third column shows that the token decoder closely follows the predicted chunk sequence. Randomly sampling k from {1 . . . 6} during training significantly boosts the parse decoder’s ability to recover the ground-truth chunk sequence compared to using a fixed k = 6. Subsequently the token decoder follows the chunk sequence more faithfully.
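The chunk-level F1 and exact-match numbers reported in Tables 3 and A3 compare two chunk sequences. As a point of reference, the sketch below shows one plausible way to compute both metrics; treating each sequence as a bag of (label, span-size) chunks and micro-averaging over a corpus are our assumptions for illustration, not details confirmed by the paper.

```python
from collections import Counter

def chunk_f1(predicted, gold):
    """Chunk-level F1 between two chunk sequences.

    Each sequence is a list of (label, size) chunks, e.g.
    [("NP", 3), ("VP", 2), ("PP", 4)].  The sequences are compared as
    bags (multisets) of chunks; this convention is an assumption made
    for illustration.
    """
    if not predicted or not gold:
        return 0.0
    overlap = sum((Counter(predicted) & Counter(gold)).values())
    precision = overlap / len(predicted)
    recall = overlap / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def exact_match(predicted, gold):
    """1.0 if the two chunk sequences are identical (in order), else 0.0."""
    return float(predicted == gold)

# Toy example: the parse decoder predicts NP3 VP2 PP4, the gold sequence is NP3 VP2 NP4.
pred = [("NP", 3), ("VP", 2), ("PP", 4)]
gold = [("NP", 3), ("VP", 2), ("NP", 4)]
print(round(chunk_f1(pred, gold), 2))  # 0.67
print(exact_match(pred, gold))         # 0.0
```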
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1282–1292 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1282 Dynamically Composing Domain-Data Selection with Clean-Data Selection by “Co-Curricular Learning” for Neural Machine Translation Wei Wang and Isaac Caswell and Ciprian Chelba Google Research {wangwe,icaswell,ciprianchelba}@google.com Abstract Noise and domain are important aspects of data quality for neural machine translation. Existing research focus separately on domaindata selection, clean-data selection, or their static combination, leaving the dynamic interaction across them not explicitly examined. This paper introduces a “co-curricular learning” method to compose dynamic domain-data selection with dynamic clean-data selection, for transfer learning across both capabilities. We apply an EM-style optimization procedure to further refine the “co-curriculum”. Experiment results and analysis with two domains demonstrate the effectiveness of the method and the properties of data scheduled by the cocurriculum. 1 Introduction Significant advancement has been witnessed in neural machine translation (NMT), thanks to better modeling and data. As a result, NMT has found successful use cases in, for example, domain translation and helping other NLP applications, e.g., (Buck et al., 2018; McCann et al., 2017). As these tasks start to scale to more domains, a challenge starts to surface: Given a source monolingual corpus, how to use it to improve an NMT model to translate same-domain sentences well? Data selection plays an important role in this context. In machine translation, data selection has been a fundamental research topic. One idea (van der Wees et al., 2017; Axelrod et al., 2011) for this problem is to use language models to select parallel data out of a background parallel corpus, seeded by the source monolingual sentences. This approach, however, performs poorly on noisy data, such as large-scale, web-crawled datasets, because data noise hurts NMT performance (Khayrallah and Koehn, 2018). The lower learning curve in Figure 1: BLEU curves over NMT training steps: domaindata selection on Paracrawl English→French data (lower curve) vs. clean-data selection on the same data (upper curve). Setup available in the experiment section. Figure 1 shows the effect of noise on domain-data selection. NMT community has realized the harm of data noise to translation quality, leading to efforts in data denoising (Koehn et al., 2018), as has been popular in computer vision (Hendrycks et al., 2018). The upper curve in Figure 1 shows the effect of clean-data selection on the same noisy data. These denoising methods, however, cannot be directly used for the problem in question as they require trusted parallel data as input. We introduce a method to dynamically combine clean-data selection and domain-data selection. We treat them as independent curricula, and compose them into a “co-curriculum”. We summarize our contributions as: 1. “Co-curricular learning”, for transfer learning across data quality. It extends the single curriculum learning work in NMT and makes the existing domain-data selection method work better with noisy data. 2. A curriculum optimization procedure to refine the co-curriculum. 
While gaining some improvement with deep models, it surprisingly improves a shallow model by 8–10 BLEU points; we find that bootstrapping seems to “regularize” the curriculum and make it easier for a small model to learn from. 3. We hope our work contributes towards a better understanding of data properties, such as noise, domain, or “easy to learn”, and their interaction with the NMT network. 2 Related Work 2.1 Measuring Domain and Noise in Data Data selection for MT usually uses a scoring function to rank sentence pairs. Cross-entropy difference (Moore and Lewis, 2010) between two language models is usually used for selecting domain sentences, e.g., (van der Wees et al., 2017; Axelrod et al., 2011). For a source sentence $x$ of length $|x|$, with a general-domain language model (LM) parameterized as $\tilde{\vartheta}$ and an in-domain LM parameterized as $\hat{\vartheta}$, the domain relevance of $x$ is calculated as: $$\phi\big(x; \tilde{\vartheta}, \hat{\vartheta}\big) = \frac{\log P\big(x; \hat{\vartheta}\big) - \log P\big(x; \tilde{\vartheta}\big)}{|x|} \quad (1)$$ (We could use both source and target LMs, but we study the problem where only a source in-domain corpus is available.) Alternative measures (Wang et al., 2017; Chen and Huang, 2016; Chen et al., 2016) also show effectiveness. By using Eq. 1 to select data, the data distribution (domain quality) of the in-domain monolingual data used to train $P(x; \hat{\vartheta})$ is transferred into the selected data through the scoring. Data selection has also been used for data denoising (Junczys-Dowmunt, 2018; Wang et al., 2018b), by using NMT models and trusted data to measure the noise level of a sentence pair. One such scoring function uses a baseline NMT model, $\tilde{\theta}$, trained on noisy data and a cleaner NMT model, $\hat{\theta}$, obtained by fine-tuning $\tilde{\theta}$ on a small trusted parallel dataset, and measures the quality of a sentence pair $(x, y)$: $$\varphi\big(x, y; \tilde{\theta}, \hat{\theta}\big) = \frac{\log P\big(y \mid x; \hat{\theta}\big) - \log P\big(y \mid x; \tilde{\theta}\big)}{|y|} \quad (2)$$ Using NMT models for selection can also lead to faster convergence (Wang et al., 2018a). With Eq. 2, the distribution (data quality) of the trusted parallel data is transferred into the selected data. These scoring functions usually use smaller networks. 2.2 Curriculum Learning for NMT Curriculum learning (CL) (Bengio et al., 2009) has been used to further improve traditional static selection. In CL, a curriculum, $C$, is a sequence of training criteria over training steps. A training criterion, $Q_t(y \mid x)$, at step $t$ is associated with a set of weights, $W_t(x, y)$, over training examples $(x, y)$ in a dataset $D$, where $y$ is the translation of $x$. $Q_t(y \mid x)$ is a re-weighting of the training distribution $P(y \mid x)$: $$Q_t(y \mid x) \propto W_t(x, y)\, P(y \mid x), \quad \forall (x, y) \in D \quad (3)$$ Hence, for a training run with a maximum of $T$ steps, $C$ is a sequence: $$C = \langle Q_1, \ldots, Q_t, \ldots, Q_T \rangle \quad (4)$$ At step $t$, an online learner samples data from $Q_t$ to train on, resulting in a task (or model), $m_t$. Therefore, $C$ corresponds to a sequence of tasks, $M = \langle m_1, \ldots, m_t, \ldots, m_f \rangle$, where $m_f$ is the final task of interest. Intermediate tasks, $m_t$, are sorted in increasing relevance to $m_f$ as a series of “stepping stones” to $m_f$, making curriculum learning a form of transfer learning that transfers knowledge through $M$ to benefit $m_f$. A performance metric $P(C, m_f)$ is used to evaluate $m_f$. There has already been rich research on CL for NMT. Fine-tuning a baseline on in-domain parallel data is a good strategy (Thompson et al., 2018; Sajjad et al., 2017; Freitag and Al-Onaizan, 2016). van der Wees et al. (2017) introduce a domain curriculum. Wang et al. (2018b) define noise level and introduce a denoising curriculum. Kocmi and Bojar (2017) use linguistically-motivated features to classify examples into bins for scheduling. Kumar et al.
(2019) use reinforcement learning to learn a denoising curriculum based on noise level of examples. Zhang et al. (2018) explore CL in general for NMT and observe faster training convergence. Zhang et al. (2019) use CL to adapt generic NMT models to a specific domain. Platanios et al. (2019) propose a CL framework to simplify and speed up training and achieve better results; a nice study in sampling schedules was carried out. CL therefore is a natural formulation for dynamic online data selection. Our work is built on two types of dynamic data selection: Dynamic domain-data selection and dynamic clean-data selection. The former uses the neural LM (NLM)based scoring function (Eq. 1), which we call 1284 domain curriculum, denoted by Cdomain. The later uses the NMT-based scoring function (Eq. 2), which we call denoising curriculum, denoted by Cdenoise. Ideally, we would have in-domain, trusted parallel data to design a true curriculum, Ctrue, as an assessment oracle: with trusted indomain parallel data, Cdenoise is expected to simultaneously perform domain-data selection and clean-data selection, becoming Ctrue. Mini-batch sampling is important for CL. Several alternatives have been introduced to evolve the training criteria Qt over time (Zhang et al., 2018; Wang et al., 2018b; van der Wees et al., 2017; Kocmi and Bojar, 2017; Platanios et al., 2019). In these curricula, tasks in M are sequenced in order of increasing relevance. Earlier tasks are exposed to a diversity of examples and later tasks progressively concentrate on data subsets more relevant to the final task. 2.3 More Related Work Junczys-Dowmunt (2018) introduces a practical and effective method to combine (static) features for data filtering. Mansour et al. (2011) combine an n-gram LM and IBM translation Model 1 (Brown et al., 1993) for domain data filtering. We compose different types of dynamic online selection rather than combining static features. Back translation (BT), e.g., (Sennrich et al., 2016), is another important approach to using monolingual data for NMT. Here we use monolingual data to seed data selection, rather than generating parallel data directly from it. Furthermore, we study the use of source-language monolingual data, in which case BT cannot be applied directly. 3 Problem Setting ] DXY is a background parallel dataset between languages X and Y . It may be crawled from the web: large (hundreds of millions of pairs), diverse and noisy. DID X is an in-domain monolingual corpus in source language X. It contains thousands to millions of sentences and specifies the testing domain. With DID X , we can train ϕ (Eq. 1) to sort data by domain relevance into a domain curriculum. DID X can be small because we can use it to fine-tune eϑ into bϑ. [ DOD XY is a small, trusted, out-of-domain (OD) parallel dataset. It contains several thousands of pairs or fewer. With [ DOD XY , we can train the φ 3 en→zh sentence pairs: 1 (en) Where is the train station? (zh-gloss) TRAIN STATION IS WHERE? 2 (en) I’d like to have two window seats. (zh-gloss) PLS. BOOK ME TWO WINDOW SEATS. 3 (en) It usually infects people older than 60. (zh-gloss) PEOPLE OLDER THAN 60 USUALLY ARE INFECTED BY IT. W1 →W2 →W3 →W4 Travel domain curri. ϕ(3) < ϕ(2) < ϕ(1)   1/3 1/3 1/3     1/3 1/3 1/3     1/2 1/2  0.0     1.0  0.0  0.0   Denoising curri. 
φ(2) < φ(1) < φ(3)   1/3 1/3 1/3     1/2  0.0 1/2     1/2  0.0 1/2     1/2  0.0 1/2   Co-curriculum (Our goal)   1/3 1/3 1/3     1/2  0.0 1/2     1.0  0.0  0.0     1.0  0.0  0.0   Table 1: Curriculum and co-curriculum examples generated from a toy dataset. Each is characterized by its re-weighting, Wt, over four steps, to stochastically order data to benefit a final task. ϕ: the domain scoring function (Eq. 1). φ: the denoising scoring function (Eq. 2). Strikethrough marks discarded examples. (Eq. 2) to sort data by noise level into a denoising curriculum. The setup, however, assumes that the indomain, trusted parallel data, [ DID XY , does not exist – Our goal is to use an easily available monolingual corpus and recycle existing trusted parallel data to reduce the cost of curating in-domain parallel data. We are interested in a composed curriculum, Cco, to improve either original curriculum: P (Cco, mf) > P (Cdenoise, mf) (5) P (Cco, mf) > P (Cdomain, mf) (6) We hope P(Cco, mf) ≈P(Ctrue, mf) as if a small in-domain, trusted parallel dataset were available. 4 Co-Curricular Learning Table 1 illustrates the idea with a toy dataset of three examples. Source sentences (en) of examples 1 and 2 are in the travel domain. Example 2 is a noisy translation. Example 3 is well-translated but belongs to the medicine domain. A traveldomain curriculum follows its data re-weighting, Wt, and gradually discards (strikethrough) less in-domain examples, optimizing towards a traveldomain model. The denoising curriculum gradually discards noisy examples to improve general accuracy, without paying special attention to travel 1285 domain. We want to “fuse” these two partial curricula into a co-curriculum to train models progressively on both in-domain and clean examples. We call this co-curricular learning. 4.1 Curriculum Mini-Batching To facilitate the definition of co-curricular learning and following (Platanios et al., 2019; Wang et al., 2018b), we define a dynamic data selection function, Dφ λ(t, D), to return the top λ(t) of examples in a dataset D sorted by a scoring function φ at a training step t. We use λ(t) = 0.5t/H, (0 < λ ≤1), as a pace function to return a selection ratio value that decays over time controlled by a hyper-parameter H.2 During training, Dφ λ(t, D) progressively evolves into smaller subdatasets that are more relevant to the final task using the scoring function. In practice, Dφ λ(t, D′) can be applied on a small buffer D′ of random examples from the much bigger D, for efficient online training. It may also be desirable to set a floor value on λ(t) to avoid potential data selection bias. This is how we implement a curriculum in experiments. We introduce two different co-curricula below. 4.2 Mixed Co-Curriculum (Cmix co ) Mixed co-curriculum, Cmix co , simply adds up the domain scoring function (Eq. 1) and the denoising function (Eq. 2). For a sentence pair (x, y), ψ(x, y) = φ(x, y) + ϕ(x). We then can constrain the re-weighting, Wt(x, y), to assign non-zero weights only to examples in Dψ λ (t, ] DXY ) at a training step. We use uniform sampling. The co-curriculum is thereby fully instantiated based on Eq. 3 and Eq. 4. However, values of φ and ϕ may not be on the same scale or even from the same family of distributions. Therefore, despite its simplicity, Cmix co may not be able to enforce either curriculum sufficiently. 4.3 Cascaded Co-Curriculum (Ccascade co ) Cascaded co-curriculum, Ccascade co , defines two selection functions and nests them. 
Let β (t) = 0.5t/F and γ (t) = 0.5t/G be two pace functions, implemented similarly to above λ(t), with different hyper-parameters F and G.3 They con2 This is inspired by the exponential learning rate schedule. In the following notations, we omit H for brevity, but the function name implies it. 3 We will omit F, G for brevity, but the function names can indicate them. trol the data-discarding paces for clean-data selection and domain-data selection, respectively. At step t, Dφ β  t, ] DXY  retains the top β (t) of background data ] DXY , sorted by scoring function φ (x, y). Dϕ γ  t, Dφ β  t, ] DXY  retains the top γ (t) of Dφ β  t, ] DXY  , re-sorted by scoring function ϕ (x). That is,  Dϕ γ ◦Dφ β   t, ] DXY  = Dϕ γ  t, Dφ β  t, ] DXY  Then Eq. 3 is redefined into Eq. 4 with uniform sampling:4 Wt (x, y) =    1 |Dϕ γ ◦Dφ β| if (x, y) ∈Dϕ γ ◦Dφ β 0 otherwise (7) Compared to Cmix co , Ccascade co cascades Cdenoise and Cdomain per step. At a time step, both pace functions, in their respective paces, discard examples that become less relevant to their own tasks. All surviving examples then have an equal opportunity to be sampled. Even though uniformly sampled, examples that are more relevant are retained longer in training and thus weighed more over time. Table 1 shows a toy example of how two curricula are composed. At step 1, no example is discarded yet, and all examples have equal sampling opportunity (W1’s). At step 2, the denoising curriculum discards the noisiest example 2, but the domain curriculum still keeps all; So only 1 and 3 are retained in the co-curriculum (W2). In step 3, the domain curriculum discards the least in-domain example 3, so only 1 is left in the cocurriculum now (W3). The denoising curriculum has a slower pace than the domain curriculum. Over the four steps, example 1 is kept longer thus weighed more. 4.4 Curriculum Optimization We further improve the co-curriculum using an EM (Dempster et al., 1977) style optimization procedure in training, as shown in Figure 2. It aims specifically to iteratively improve the denoising selection, without losing quality on the domain selection. 4 Function nesting is asymmetrical, but the uniform sampling seems to make the nesting irrelevant to the nesting order. In experiments, we did not notice empirical differences between nesting one way or the other. 1286 0 ] DXY ϕ  x; eϑ, bϑ  GEN-C Ci co fine-tune eθ with Ci co bθ∗ bθi = bθ∗ i = i + 1 φ  y|x; eθ, bθi  DID X [ DOD XY Figure 2: Co-curricular learning with an EM-style optimization procedure. Thicker arrows form the bootstrapping loop. With ] DXY and DID X , we train a domain scoring function, ϕ(x; eϑ, bϑ). With ] DXY and [ DOD XY , we train a denoising scoring function, φ(y|x; eθ, bθ). The in-domain component bϑ of ϕ or the clean component bϑ of φ are obtained by fine-tuning eϑ or eθ on the respective seed data. These initialize the procedure (iteration 0). At iteration i, we generate a concrete cocurriculum using the dynamic re-weighting, Wt, as defined in Section 4. Let GEN-C denote the curriculum generation process: Cco = GEN-C  ] DXY , φi, ϕ  (8) Then, we fine-tune the original noisy NMT component, eθ, of φ on Cco: bθ∗= arg max bθ P (Cco, mf) (9) bθ∗is used to replace the clean component of φ bθi = bθ∗ i = i + 1 bθi is then compared against the original eθ for scoring. The updated φ and the constant ϕ work together to generate a new co-curriculum in the next iteration going back to Eq. 8. 
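To make the cascaded selection of Eq. 7 and the bootstrapping loop above concrete, the sketch below shows one way they could be wired together. The names `denoise_score`, `domain_score`, `log_prob`, and `fine_tune` are hypothetical stand-ins for the scoring functions of Eq. 1–2 and ordinary NMT fine-tuning; the buffer-based selection, the pace half-lives, and the floor ratios mirror Sections 4.1 and 5.1, but the surrounding glue code is our assumption, not the authors' implementation.

```python
import random

def pace(t, half_life, floor):
    """Selection ratio 0.5 ** (t / half_life), clipped at a floor value (Sections 4.1 and 5.1)."""
    return max(0.5 ** (t / half_life), floor)

def top_fraction(examples, score, ratio):
    """Keep the top `ratio` fraction of examples, sorted by `score` in descending order."""
    kept = max(1, int(round(len(examples) * ratio)))
    return sorted(examples, key=score, reverse=True)[:kept]

def cascaded_batch(buffer, t, denoise_score, domain_score, batch_size,
                   F=400_000, G=900_000, beta_floor=0.2, gamma_floor=0.5):
    """One training step of the cascaded co-curriculum (Eq. 7).

    `buffer` is a list of (x, y) pairs drawn from the background data;
    `denoise_score(x, y)` corresponds to the denoising scorer of Eq. 2
    and `domain_score(x)` to the domain scorer of Eq. 1.
    """
    denoised = top_fraction(buffer, lambda ex: denoise_score(*ex), pace(t, F, beta_floor))
    in_domain = top_fraction(denoised, lambda ex: domain_score(ex[0]), pace(t, G, gamma_floor))
    # Uniform sampling over the examples surviving both selections (the weights W_t).
    return random.choices(in_domain, k=batch_size)

def em_refine(noisy_nmt, clean_nmt, domain_score, fine_tune, iterations=3):
    """EM-style curriculum optimization (Figure 2, Eq. 8-9).

    `noisy_nmt` / `clean_nmt` are hypothetical model objects exposing
    `log_prob(y, x)`, and `fine_tune(model, curriculum)` is a hypothetical
    routine that fine-tunes a model on batches scheduled by the curriculum
    and returns the updated model.  Only the clean NMT component of the
    denoising scorer is refreshed across iterations; the domain scorer stays fixed.
    """
    curriculum = None
    for _ in range(iterations):
        def denoise_score(x, y, clean=clean_nmt):           # Eq. 2 with the current clean model
            return (clean.log_prob(y, x) - noisy_nmt.log_prob(y, x)) / len(y)

        def curriculum(buffer, t, batch_size, score=denoise_score):  # GEN-C (Eq. 8)
            return cascaded_batch(buffer, t, score, domain_score, batch_size)

        clean_nmt = fine_tune(noisy_nmt, curriculum)         # Eq. 9: fine-tune the noisy model on C_co
    return clean_nmt, curriculum
```

A call such as `cascaded_batch(buffer, t, denoise_score, domain_score, batch_size=128)` would then replace uniform mini-batch sampling inside the training loop.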
In this process, only the denoising function φ is iteratively updated, made more aware of the domain. We call the procedure EM-style because ] DXY is treated as incomplete without the (hidden) data order. The generated Cco in each iteration sorts the data and thus is viewed as complete. It is then used to train bθ by maximizing the performance of the final task. bθ and Cco bootstrap each other. The process finishes after a pre-defined number of iterations. We use shallow parameterization for scoring functions but we can train a deep model on the final Cco. The process also uses fine-tuning, so it can be run efficiently. In principle, the domain-data scoring function ϕ can be updated in a similar manner, too, by updating its in-domain component, bϑ. This may help when the in-domain monolingual corpus is very small. An alternating optimization process can be used to bootstrap both. We, however, do not investigate this. 5 Experiments 5.1 Setup We consider two background datasets and two test domains, so we have four experiment configurations. Each configuration has as inputs a background dataset, an in-domain source-language corpus and a (small) trusted parallel dataset that is out-of-domain. The inputs of a configuration are shown in Figure 2. As alternative background datasets, we use the English→French Paracrawl data,5 (300 million pairs), and the WMT14 training data (40 million pairs). The former is severely noisier than the later. We adopt sentence-piece model and apply open-source implementation (Kudo, 2018) to segment data into sub-word units with a source-target shared 32000 sub-word vocabulary. We use two test domains: the English→French IWSLT15 test set, in spoken language domain; and the English→French WMT14 test set, in news domain. For IWSLT15, we use the English side of its provided parallel training data (220 thousand examples) as DID X , but use the parallel version as [ DOD XY for the WMT14 domain. The IWSLT14 test set is used for validation. For the WMT14 domain, the provided 28 million English sentences are used as DID X . WMT 2010-2011 test sets are concatenated as [ DID XY for news6, or as [ DOD XY for the above 5 https://paracrawl.eu 6 Strictly speaking, though all are in news, the WMT 2014 monolingual data, the WMT 2011-2012 test sets and the 2014 test set are not necessarily in the exact same news domain. So this news test domain could be treated as a looser case than the IWSLT domain and examines the method at a slightly different position in the spectrum of the problem. 1287 IWSLT15 test domain. So, the trusted data are reversely shared across the two test domains. Additionally, WMT 2012-2013 are used as the validation set for the WMT14 test domain. Our method does not require the in-domain trusted data, but we use it to construct bounds in evaluation. We use RNN-based NMT (Wu et al., 2016) to train models. Model parameterization for θ’s of φ (Eq 2) or ϑ’s ϕ (Eq 1) is 512 dimensions by 3 layers – NLMs are realized using NMT models with dummy source sentences (Sennrich et al., 2016). Deep models are 1024 dimensions by 8 layers. Unless specified, results are reported for deep models. We compute truecased, detokenized BLEU with mteval-v14.pl. Training on Paracrawl uses Adam in warmup and then SGD for a total of 3 million steps using batch size 128, learning rate 0.5 annealed, at step 2 million, down to 0.05. 
Training on WMT 2014 uses batch size 96, dropout probability 0.2 for a total of 2 million steps, with learning rate 0.5 annealed, at step 1.2 million, down to 0.05, too. No dropout is used in Paracrawl training due to its large data volume. For the pace hyper-parameters (Section 4), we empirically use H = F = 400k, G = 900k. Floor values set for λ, β, γ are top 0.1, 0.2, 0.5 selection ratios, respectively, such that in the cascaded co-curriculum case, the tightest effective percentile value would be the same 0.1 = 0.2 × 0.5, too. All single curriculum experiments use the same pace setting as Cmix. 5.2 Baselines and Oracles We build various systems below as baselines and oracles. Oracle systems use in-domain trusted parallel data. Baselines: 1. Crandom : Baseline model trained on background data with random data sampling. 2. Cdomain: Dynamically fine-tunes Crandom with a domain curriculum (van der Wees et al., 2017). 3. Cdenoise: Dynamically fine-tunes Crandom with a denoising curriculum (Wang et al., 2018b). Oracles: 4. Ctrue: Dynamically fine-tunes Crandom with the true curriculum. 5. ID fine-tune Crandom: Simply fine-tunes Crandom with in-domain (ID) parallel data. Models Test BLEU IWSLT15 WMT14 (P)aracrawl P1: Crandom 34.6 31.6 P2: Cdomain 35.7 32.4 P3: Cdenoise 36.6 33.6 P4: Ctrue 37.2 34.2 P5: ID fine-tune P1 38.5 34.0 (W)MT W1: Crandom 36.5 35.0 W2: Cdomain 37.6 35.9 W3: Cdenoise 37.4 36.0 W4: Ctrue 38.5 36.3 W5: ID fine-tune W1 39.7 35.9 Table 2: Baseline and oracle models trained on Paracrawl data and WMT data, respectively. ID: in-domain. P2,3,4 (or W2,3,4) each dynamically fine-tunes P1 (or W1) with the respective curriculum. Except for P1 and W1, the two BLEU scores in each row are for two different training runs, each focusing on its own test domain (configuration). We’ll see if our method is better than either original curriculum and how close it is to the true curriculum oracle. In most experiments, we fine-tune a warmed-up (baseline) model to compare curricula, for quicker experiment cycles. Baseline and oracle BLEU scores are shown in Table 2. Note that, except for P1 and W1, the two BLEU scores in a row are for two different training runs, each focusing on its own test domain. On either training dataset, domain curriculum, Cdomain, improves baseline, Crandom, by 0.81.1 BLEU (P3 vs P1, W3 vs W1). Cdomain falls behind of Cdenoise on the noisy Paracrawl dataset (P2 vs P3), but delivers matched performance on the cleaner WMT dataset (W2 vs W3) – noise compromises the domain capability. On the WMT training data, Cdenoise improves baselines by about +1.0 BLEU on either test domain (W3 vs W1), and more on the noisier Paracrawl data: +2.0 on either test domain (P3 vs P1). The true curriculum (P4, W4) bounds the performance of Cdomain and Cdenoise. Simple in-domain fine-tuning gives good improvements (P5 vs P1, W5 vs W1). 5.3 Co-Curricular Learning Cascading vs. mixing. Table 3 shows per-step cascaded filtering can work better than flat mixing (P7 vs P6). So we use Ccascade co for the remaining experiments. Curriculum BLEU comparisons. Table 4 shows the effectiveness of co-curricular learning. On Paracrawl, co-curriculum (P7) gives more than +2 BLEU on top of no CL (P1). It improves Cdomain (P7 vs P2) by +1.4 BLEU on IWSLT15 and +1.6 1288 CoCurriculum Test BLEU IWSLT15 WMT14 P6: Cmix co 36.2 33.8 P7: Ccascade co 37.1 34.0 Table 3: Per-step cascading works better than mixing on Paracrawl data. 
Curriculum Test BLEU IWSLT15 WMT14 P1: Crandom 34.6 31.6 P2: Cdomain 35.7 32.4 P3: Cdenoise 36.6 33.6 P7: Cco 37.1 34.0 Cco −Cdomain + 1.4 +1.6 Cco −Ctrue −0.1 −0.2 W1: Crandom 36.5 35.0 W2: Cdomain 37.6 35.9 W3: Cdenoise 37.4 36.0 W7: Cco 37.8 36.4 Cco −Cdomain +0.2 +0.5 Cco −Ctrue −0.7 +0.1 Table 4: Co-curriculum improves either constituent curriculum and no CL, can be close to the true curriculum on noisy data. BLEU on WMT14. It is better than either constituent curriculum (P2 or P3), close to the true curriculum (P4). On the cleaner WMT training data, cocurriculum (W7) improves either constituent curricula (W2 and W3) by smaller gains than Paracrawl: +0.2 BLEU on IWSLT15 and +0.4 on WMT14. Compared to Ctrue W5, co-curriculum W7 falls behind (-0.7 BLEU) on IWSLT15 and matches (+0.1 BLEU) on WMT14. So Cco outperforms either constituent curriculum, as we target in Section 3. In both background data cases, using in-domain trusted parallel data to build oracles (P5, W5) are more effective than selecting data in our setup. 5.4 Effect of Curriculum Optimization We further bootstrap the co-curriculum with the EM-style optimization procedure (Figure 2) for three iterations for all four configurations. Shallow models. We use the translation performance of the clean component P(y|x; bθ) in scoring function φ (Eq. 2) as an indicator to the quality of Cco per iteration. Figure 3 shows that the BLEU scores of P(y|x; bθ) steadily become better by iterations.7 bθ has 512 dimensions and 3 lay7 They also include two initialization points: the noisy eθ, and the initial clean bθ obtained by fine-tuning eθ on the clean data. baseline clean EM-1 EM-2 EM-3 22 24 26 28 30 32 34 Iteration BLEU IWSLT15 WMT14 Figure 3: The EM-style optimization has a big impact on small-capacity models, measured in BLEU. Experiments were carried out on Paracrawl data. Curriculum Test BLEU IWSLT15 WMT14 P2: Cdomain 35.7 32.4 P7: Cco 37.1 34.0 P8: P7+Optimization 37.3 34.6 P8 - Cdomain +1.6 +2.0 W2: Cdomain 37.6 35.9 W7: Cco 37.8 36.4 W8: W7+Optimization 37.8 36.5 W8 - Cdomain +0.2 +0.6 Table 5: EM-style optimization further improves domain curriculum. But, overall, it has a small impact on deep models. ers. Surprisingly, EM-3 improves baseline by +10 BLEU on IWSLT15, +8.2 BLEU on WMT14 and performs better than fine-tuning baseline with the clean, out-of-domain parallel data we have. They even reach the performance of Crandom (P1) that uses a much deeper model (1024 dimensions x 8 layers) trained on the vanilla data. Deep models. Table 5 shows the BLEUs of deep models (1024 dimensions x 8 layers) trained on the final co-curriculum. P8 performs slightly better than the non-bootstrapped version P7 on Paracrawl: +0.6 BLEU on WMT14 test and +0.2 on IWSLT15 test. The differences on the WMT data appear to be smaller (W8 vs. W7). So, curriculum bootstrapping has a small impact overall on deep models. Why the difference? Why is there such a difference? We analyze the properties of the cocurriculum. Each curve in Figure 4 corresponds to a single curriculum that simulates the online data selection from looser selection (left x-axis) to moretightened selection (right x-axis). 
During the course of a single CL, the curriculum pushes “harder” examples with higher per-word loss (than 1289 0 20 40 60 80 1.00 2.00 3.00 4.00 Filtering Percentage (%) ((1 - Selection Ratio)*100) Per-word Loss baseline clean EM-1 EM-2 EM-3 Figure 4: Curriculum learning and optimization push “easier-to-learn” (lower per-word loss) examples to late curriculum (right) and harder examples (higher per-word loss) to early curriculum (left). 0 20 40 60 80 2.00 2.50 3.00 3.50 Filtering Percentage (%) ((1 - Selection Ratio)*100) Standard Deviation baseline clean EM-1 EM-2 EM-3 Figure 5: Curriculum learning and optimization push “regularized” (lower variance) examples to late curriculum and higher-variance examples to early curriculum. baseline) to the early curriculum phase (for exploration), and “easier-to-learn” examples with lower per-word loss to the late curriculum phase (for exploitation). Over iterations, a later-iteration curriculum schedules even easier examples than a previous iteration at late curriculum. The story happens reversely at early curriculum due to probability mass conservation. Figure 5 shows a similar story regarding per-word loss variance. So, curriculum optimization “regularizes” the curriculum and makes it easier-to-learn towards the end of CL. These may be important for a small-capacity model to learn efficiently. The fact that the deep model is not improved as much means that ‘clean’ may have taken most of the headroom for deep models. Meanwhile, according to Figure 6, each individual curriculum concentrates more on news indomain examples as training progresses. Over iterations, bootstrapping makes the co-curriculum more news-domain aware. Due to the use of the 20 40 60 80 −2.00 −1.50 −1.00 −0.50 0.00 0.50 Filtering Percentage (%) ((1 - Selection Ratio)*100) News-Domain Relevance baseline clean EM-1 EM-2 EM-3 Figure 6: The denoising curriculum is made more aware of news-domain after iterations. Figure drawn for the (Paracrawl, news) configuration. Within a single curriculum, ‘baseline’ randomly shuffles data, thus flat curve. ‘clean’ uses the out-of-domain clean parallel data, thus not that much news relevance. All curves show negative news-domain relevance, indicating lack of news data in Paracrawl data. Curriculum Test BLEU IWSLT15 WMT14 P8: Fine-tune with Cco 37.3 34.6 P9: Retrain with Cco 37.9 35.6 W8: Fine-tune with Cco 37.8 36.5 W9: Retrain with Cco 38.1 36.3 Table 6: Retraining with a curriculum may work better than fine-tuning with it, on a large, noisy dataset. denoising curriculum, data in curriculum becomes cleaner, too. So, although the co-curriculum schedules data from hard to easier-to-learn, which seems opposite to the general CL, it also schedules data from less in-domain to cleaner and more in-domain, which captures the spirit of CL. 5.5 Retraining On Paracrawl, retraining NMT with co-curriculum improves dynamic fine-tuning, as shown in Table 6 (P9 vs. P8): +0.6 BLEU on IWSLT15 and +1.0 BLEU on WMT14. On WMT14 training data, retraining (W9) seems to perform similarly to fine-tuning on a warmed-up model (W8): +0.3 on IWSLT15 but -0.2 on WMT14; We speculate that this may be due to the smaller WMT training data size. 5.6 Dynamic vs. Static Data Selection Co-curricular learning is dynamic. How does being dynamic matter? Table 7 shows that finetuning on the top 10% data8 static selection (P10, W10) gives good improvements over baselines P1, W1, but co-curriculum (P9, W9) may do better. 
8 This is the ratio where the pace function reaches the floor value in training (see end of Section 5.1). 1290 Model Test BLEU IWSLT15 WMT14 P1: Crandom 34.6 31.6 P9: Curriculum (Dynamic) 37.9 35.6 P10: Static selection 36.8 34.6 W1: Crandom 36.5 35.0 W9: Curriculum (Dynamic) 38.1 36.3 W10: Static selection 37.4 36.2 Table 7: Curriculum learning works slightly better than finetuning a warmed-up model with a top static selection. Model Test BLEU IWSLT15 WMT14 P9: Retrain with curriclum 37.9 35.6 P11: Retrain with static sel. 37.1 34.6 W9: Retrain with curriculum 38.1 36.3 W11: Retrain on static sel. 34.0 31.7 Table 8: Curriculum learning works better than retraining with a static, top selection, especially when the training dataset is small. This confirms findings by (van der Wees et al., 2017). What if we retrain on the static data, too? In Table 8, W11 vs. W9 shows that retrained models on the static data is far behind for the WMT14 training – top 10% selection has only 4 million examples. On Paracrawl, P11 vs. P9 are closer, but retraining on co-curriculum performs still better. In all cases, co-curricular learning gives the best results. We may tune the static selection for better results, but then it is the exact point of CL, to evolve the data re-weighting without the need of a hard cutoff on selection ratio. 5.7 Discussion Evidence of data-quality transfer. Figure 7 visualizes that CL in one domain (e.g., web) may enable CL in another. This is the foundation of our proposed method. To draw the figure, using a random sample of 2000 pairs from WMT training data and some additional in-domain parallel data, we sort examples by tightening the selection ratio according to a true web curriculum. The web curve shows the co-relation between selection ratio and data relevance to web. The same data order appears to yield increasing relevance to other domains, too, with bigger effect on a closer ‘news’ domain, but smaller effect on ‘patent’ and ‘short’ (sentences). Regularizing data without a teacher. The analysis in Section 5.4 shows that the denoising scoring function and its bootstrapped versions tend to 20 40 60 80 −0.04 −0.02 0.00 0.02 0.04 0.06 Filtering Percentage (%) ((1 - Selection Ratio)*100) Domain Relevance news patent short web Figure 7: Curriculum learning in one domain may enable curriculum learning in another. regularize the late curriculum and make the scheduled data easier for small models to learn on. One potential further application of this data property may be in learning a multitask curriculum where regular data may be helpful for multiple task distributions to work together in the same model. This has been achieved by knowledge distillation in existing research (Tan et al., 2019), by regularizing data with a teacher – We could instead regularize data by example selection, without a teacher. We leave this examination for future research. Pace function hyper-parameters. In experiments, we found that data-discarding pace functions seem to work best when they simultaneously decay down to their respective floors. Adaptively adjusting them seems an interesting future work. 6 Conclusion We present a co-curricular learning method to make domain-data selection work better on noisy data, by dynamically composing it with clean-data selection. We show that the method improves over either constituent selection and their static combination. We further refine the co-curriculum with an EM-style optimization procedure and show its effectiveness, in particular on small-capacity models. 
In future, we would like to extend the method to handle more than two curricula objectives. Acknowledgments The authors would like to thank Yuan Cao for his help and advice, the three anonymous reviewers for their constructive reviews, Melvin Johnson for early discussions, Jason Smith, Orhan Firat, Macduff Hughes for comments on an earlier draft, and Wolfgang Macherey for his early work on the topic. 1291 References Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 355–362. Yoshua Bengio, J´erˆome Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26 th International Conference on Machine Learning, page 8696, Montreal, Canada. Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Comput. Linguist., 19(2):263– 311. Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Wojciech Pawe Gajewski, Andrea Gesmundo, Neil Houlsby, and Wei Wang. 2018. Ask the right questions: Active question reformulation with reinforcement learning. Boxing Chen and Fei Huang. 2016. Semi-supervised convolutional networks for translation adaptation with tiny amount of in-domain data. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL), pages 314–323. Boxing Chen, Roland Kuhn, George Foster, Colin Cherry, and Fei Huang. 2016. Bilingual methods for adaptive training data selection for machine translation. In AMTA. A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. JOURNAL OF THE ROYAL STATISTICAL SOCIETY, SERIES B, 39(1):1–38. Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine translation. CoRR, abs/1612.06897. Dan Hendrycks, Mantas Mazeika, Duncan Wilson, and Kevin Gimpel. 2018. Using trusted data to train deep networks on labels corrupted by severe noise. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 10456–10465. Curran Associates, Inc. Marcin Junczys-Dowmunt. 2018. Dual conditional cross-entropy filtering of noisy parallel corpora. In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, pages 901–908, Belgium, Brussels. Association for Computational Linguistics. Huda Khayrallah and Philipp Koehn. 2018. On the impact of various types of noise on neural machine translation. CoRR, abs/1805.12282. Tom Kocmi and Ondˇrej Bojar. 2017. Curriculum learning and minibatch bucketing in neural machine translation. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 379–386. INCOMA Ltd. Philipp Koehn, Huda Khayrallah, Kenneth Heafield, and Mikel L. Forcada. 2018. Findings of the wmt 2018 shared task on parallel corpus filtering. In Proceedings of the Third Conference on Machine Translation, Belgium, Brussels. Association for Computational Linguistics. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66–75. Association for Computational Linguistics. 
Gaurav Kumar, George Foster, Colin Cherry, and Maxim Krikun. 2019. Reinforcement learning based curriculum optimization for neural machine translation. In 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics. Saab Mansour, Joern Wuebker, and Hermann Ney. 2011. Combining translation and language model scoring for domain-specific data filtering. In International Workshop on Spoken Language Translation, pages 222–229. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 6294– 6305. Curran Associates, Inc. Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. In Proceedings of the ACL 2010 Conference, pages 220– 224. Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnab´as P´oczos, and Tom M. Mitchell. 2019. Competence-based curriculum learning for neural machine translation. In 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics. Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Yonatan Belinkov, and Stephan Vogel. 2017. Neural machine translation training in a multi-domain scenario. arXiv preprint arXiv:1708.08712v2. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving Neural Machine Translation Models with Monolingual Data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1292 86–96, Berlin, Germany. Association for Computational Linguistics. Xu Tan, Yi Ren, Di He, Tao Qin, and Tie-Yan Liu. 2019. Multilingual neural machine translation with knowledge distillation. In International Conference on Learning Representations. Brian Thompson, Huda Khayrallah, Antonios Anastasopoulos, Arya D. McCarthy, Kevin Duh, Rebecca Marvin, Paul McNamee, Jeremy Gwinnup, Tim Anderson, and Philipp Koehn. 2018. Freezing subnetworks to analyze domain adaptation in neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 124–132. Association for Computational Linguistics. Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2017. Instance weighting for neural machine translation domain adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1482–1488. Association for Computational Linguistics. Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2018a. Dynamic sentence sampling for efficient training of neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 298–304, Melbourne, Australia. Association for Computational Linguistics. Wei Wang, Taro Watanabe, Macduff Hughes, Tetsuji Nakagawa, and Ciprian Chelba. 2018b. Denoising neural machine translation training with trusted data and online data selection. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 133–143. Association for Computational Linguistics. Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2017. Dynamic data selection for neural machine transaltion. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1400–1410. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, ukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Xuan Zhang, Gaurav Kumar, Huda Khayrallah, Kenton Murray, Jeremy Gwinnup, Marianna J. Martindale, Paul McNamee, Kevin Duh, and Marine Carpuat. 2018. An empirical exploration of curriculum learning for neural machine translation. CoRR, abs/1811.00739. Xuan Zhang, Pamela Shapiro, Gaurav Kumar, Paul McNamee, Marine Carpuat, and Kevin Duh. 2019. Curriculum learning for domain adaptation in neural machine translation. In 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1293–1303 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1293 On the Word Alignment from Neural Machine Translation∗ Xintong Li1, Guanlin Li2, Lemao Liu3, Max Meng1, Shuming Shi3 1The Chinese University of Hong Kong 2Harbin Institute of Technology 3Tencent AI Lab {znculee, epsilonlee.green}@gmail.com {redmondliu, shumingshi}@tencent.com [email protected] Abstract Prior researches suggest that neural machine translation (NMT) captures word alignment through its attention mechanism, however, this paper finds attention may almost fail to capture word alignment for some NMT models. This paper thereby proposes two methods to induce word alignment which are general and agnostic to specific NMT models. Experiments show that both methods induce much better word alignment than attention. This paper further visualizes the translation through the word alignment induced by NMT. In particular, it analyzes the effect of alignment errors on translation errors at word level and its quantitative analysis over many testing examples consistently demonstrate that alignment errors are likely to lead to translation errors measured by different metrics. 1 Introduction Machine translation aims at modeling the semantic equivalence between a pair of source and target sentences (Koehn, 2009), and word alignment tries to model the semantic equivalence between a pair of source and target words (Och and Ney, 2003). As a sentence consists of words, word alignment is conceptually related to machine translation and such a relation can be traced back to the birth of statistical machine translation (SMT) (Brown et al., 1993), where word alignment is the basis of SMT models and its accuracy is generally helpful to improve translation quality (Koehn et al., 2003; Liu et al., 2005). In neural machine translation (NMT), it is also important to study word alignment, because word alignment provides natural ways to understanding black-box NMT models and analyzing their translation errors (Ding et al., 2017). Prior researches ∗Work done while X. Li interning at Tencent AI Lab. L. Liu is the corresponding author. observed that word alignment is captured by NMT through attention for recurrent neural network based NMT with a single attention layer (Bahdanau et al., 2014; Mi et al., 2016; Liu et al., 2016; Li et al., 2018). Unfortunately, we surprisingly find that attention may almost fail to capture word alignment for NMT models with multiple attentional layers such as TRANSFORMER (Vaswani et al., 2017), as demonstrated in our experiments. In this paper, we propose two methods to induce word alignment from general NMT models and answer a fundamental question that how much word alignment NMT models can learn (§ 3). The first method explicitly builds a word alignment model between a pair of source and target word representations encoded by NMT models, and then it learns additional parameters for this word alignment model with the supervision from an external aligner similar to Mi et al. (2016) and Liu et al. (2016). The second method is more intuitive and flexible: it is parameter-free and thus does not need retraining and external aligner. Its key idea is to measure the prediction difference of a target word if a source word is removed, inspired by Arras et al. (2016) and Zintgraf et al. (2017). 
Experiments on an advanced NMT model show that both methods achieve much better word alignment than the method by attention (§ 4.1). In addition, our experiments demonstrate that NMT captures good word alignment for those words mostly contributed from source (CFS), while their word alignment is much worse for those words mostly contributed from target (CFT). This finding offers a reason why advanced NMT models delivering excellent translation capture worse word alignment than statistical aligners in SMT, which was observed in prior researches yet without deep explanation (Tu et al., 2016; Liu et al., 2016). Furthermore, we understand and interpret NMT from the viewpoint of word alignment induced 1294 from NMT (§ 4.2). Unlike existing researches on interpreting NMT by accessing few examples as case study (Ding et al., 2017; Alvarez-Melis and Jaakkola, 2017), we aim to provide quantitatively analysis for interpreting NMT by accessing many testing examples, which makes our findings more general. To this end, we firstly compare the effects of both approaches to interpreting NMT and find the prediction difference is better for understanding NMT. Consequently, we propose to quantitatively analyze the translation errors by using alignment from prediction difference. Since it is far from trivial to measure the translation errors at the word level, we design experiments by using two metrics to detect translation errors. Our empirical results consistently show that wrong alignment is more likely to induce the translation errors meanwhile right alignment favors to encourage the translation quality. Our analysis further suggest that word alignment errors for CFS words are responsible for translation errors in some extent. This paper makes the two-fold contributions: • It systematically studies word alignment from NMT and proposes two approaches to induce word alignment which are agnostic to specific NMT models. • It understands NMT from the viewpoint of word alignment and investigates the effect of alignment errors on translation errors via quantitative analysis over many testing examples. 2 Preliminaries 2.1 Neural Machine Translation Given a source sentence x = ⟨x1, · · · , x|x|⟩and a target sentence y = ⟨y1, · · · , y|y|⟩, NMT aims at maximizing the following conditional probabilities: 1 P (y | x) = |y| Q i=1 P (yi | y<i, x) = |y| Q i=1 P yi | sL i  , (1) where y<i = ⟨y1, . . . , yi−1⟩denotes a prefix of y with length i −1, and sL i is the final decoding state of yi. Generally, the conditional distribution P yi | sL i  is somehow modeled within an 1Throughout this paper, bold font such as x denotes a sequence while regular font such as x denotes an element which may be a scalar x, vector x or matrix X. encoder-decoder framework. In encoding stage, the source sentence x is encoded as a sequence of hidden vectors h by an encoder according to specific NMT models, such as a multi-layer encoder consisting of recurrent neural network (RNN), convolutional neural network (CNN), or self-attention layer. In decoding stage, each decoding state in lth layer sl i is computed as follows: sl i = f  sl−1 i , sl <i, cl i  , s0 i = yi, (2) where l ∈{1, . . . , L}, yi is the word embedding of word yi, f is a general function dependent on a specific NMT model, cl i is a context vector in lth layer, computed from h and sl <i according to different NMT models. 
As the dominant models, attentional NMT models define the context vector cl i as a weighted sum of h, where the weight αl i = g  sl−1 i , sl <i, h  is defined by a similarity function. Due to the space limitation, we refer readers to Bahdanau et al. (2014), Gehring et al. (2017) and Vaswani et al. (2017) for the details on the definitions of f and g. 2.2 Alignment by Attention Since the attention weight αl i,j measures the similarity between sl−1 i and hj, it has been widely used to evaluate the word alignment between yi and xj (Bahdanau et al., 2014; Ghader and Monz, 2017). Once an attentional NMT model has been trained, one can easily extract word alignment A from the attention weight α according to the style of maximum a posterior strategy (MAP) as follows: Ai,j(α) =    1 j = arg max j′ αi,j′ 0 o/w , (3) where Ai,j = 1 indicates yi aligns to xj. For NMT models with multiple attentional heads attentional layers as in Vaswani et al. (2017), we sum all attention weights with respect to all heads to a single α before MAP in equation 3. 3 Methods to Inducing Word Alignment Although attention might obtain some word alignment as described in previous section, it is unknown whether NMT models contain more word alignment information than that obtained by attention. In addition, the method using attention is useful to induce word alignment for attentional 1295 NMT models, whereas it is useless for general NMT models. In this section, in order to induce word alignment from general NMT models, we propose two different methods, which are agnostic to specific NMT models. 3.1 Alignment by Explicit Alignment Model Given a source sentence x, a target sentence y, following Liu et al. (2005) and Taskar et al. (2005), we explicitly define a word alignment model as follows: P (xj | yi; W ) = exp (δ (xj, yi; W )) Pm j′=1 exp δ xj′, yi; W , (4) where δ (xj, yi; W ) is a distance function parametrized by W . Ideally, δ is able to include arbitrary features such as IBM model 1 similar to Liu et al. (2005). However, as our goal is not to achieve the best word alignment but to focus on that captured by an NMT model, we only consider these features completely learned in NMT. Hence, we define the δ (xj, yi; W ) = (xj∥hj)⊤W yi∥sL i  , (5) where xj and yi are word embeddings of xj and yi learned in NMT, hj is the hidden unit of xj in the encoding network and sL i is the hidden unit of yj in the decoding network, ∥denotes the concatenation of a pair of column vectors of dimension d, and W is a matrix of dimension 2d × 2d. The explicit word alignment model is trained by maximizing the objective function with respect to the parameter matrix W : max W X ∀j,i:Aref ij =1 log P (xj | yi; W ) , (6) where Aref ij is the reference alignment between xj and yi for a sentence pair x and y. As the number of elements in W is up to one million (i.e., (2 × 512)2), it is not feasible to train it using a small dataset with gold alignment. Therefore, following Mi et al. (2016) and Liu et al. (2016), we run statistical word aligner such as FAST ALIGN (Dyer et al., 2013) on a large corpus and then employ resulting word alignment as the silver alignment Aref for training. Note that our goal is to quantify word alignment learned by an NMT model, and thus we only treat W as the parameter to be learned, which differs from the joint training all parameters including those from NMT models as in Mi et al. (2016) and Liu et al. (2016). 
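To make the explicit alignment model concrete, the following is a minimal NumPy sketch of Eqs. (4)-(6); the function names and the representation of the silver alignment as a set of (i, j) index pairs are illustrative assumptions, and the gradient-based optimization of W is left out.

```python
import numpy as np

def eam_alignment_probs(src_emb, src_hid, tgt_emb, tgt_hid, W):
    """Explicit alignment model: P(x_j | y_i; W) is a softmax over source positions
    of the bilinear score delta(x_j, y_i; W) = (x_j || h_j)^T W (y_i || s_i) of Eq. (5).
    All NMT-side representations are frozen; only W (2d x 2d) is learned."""
    src = np.concatenate([src_emb, src_hid], axis=-1)   # (T, 2d), row j is (x_j || h_j)
    tgt = np.concatenate([tgt_emb, tgt_hid], axis=-1)   # (T', 2d), row i is (y_i || s_i)
    scores = tgt @ W.T @ src.T                          # scores[i, j] = (x_j || h_j)^T W (y_i || s_i)
    scores -= scores.max(axis=-1, keepdims=True)        # stabilize the softmax
    exp = np.exp(scores)
    return exp / exp.sum(axis=-1, keepdims=True)        # row i is P(. | y_i; W), Eq. (4)

def eam_loss(probs, silver_alignment):
    """Negative of the objective in Eq. (6), summed over silver-aligned (i, j) pairs."""
    return -sum(np.log(probs[i, j]) for (i, j) in silver_alignment)
```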
After training, one obtains the optimized W and then easily infers word alignment for a test sentence pair ⟨x, y⟩via the MAP strategy as defined in equation 3 by setting αi,j′ = P xj′ | yi; W  . Note that if word embeddings and hidden units learned by NMT models capture enough information for word alignment, the above method can obtain excellent word alignment. However, because the dataset for supervision in training definitely include some data intrinsic word alignment information, it is unclear how much word alignment is only from NMT models. Therefore, we propose the other method which is parameter-free and only dependent on NMT models themselves. 3.2 Alignment by Prediction Difference The intuition to this method is that if yi aligns to xj, the relevance between yi and xj should be much higher than that between yi and any other xk with k ̸= j. Therefore, the key to our method is that how to measure the relevance between yi and xj. Sampling method Zintgraf et al. (2017) propose a principled method to measure the relevance between a pair of tokens in input and output. It is estimated by measuring how the prediction of yi in the output changes if the input token xj is unknown. Formally, the relevance between yi and xj for a given sentence pair ⟨x, y⟩is defined as follows: R (yi, xj) = P (yi | y<i, x) −P yi | y<i, x\j  , (7) with P yi | y<i, x\j  = X x P x | y<i, x(j,∅)  P yi | y<i, x(j,x)  ≈Ex∼P(x)  P yi | y<i, x(j,x)  , (8) where x(j,x) denotes the sequence by replacing xj with x in x, particularly x(j,∅) denotes the sequence by removing xj from x, P(yi | y<i, x) is defined in equation 1 and P x | y<i, x(j,∅)  is approximated by the empirical distribution P(x), which can be considered as the 1-gram language model for the source side of the training corpus. Unlike a computer vision task in Zintgraf et al. (2017), the size of source vocabulary in NMT is 1296 up to 30000 and thus summation over this large vocabulary is challenging in computational efficiency. As a result, we only sample multiple words to approximate the expectation in equation 8 by Monte Carlo (MC) approach. Deterministic method Inspired by the idea of dropout (Srivastava et al., 2014), we measure the relevance by disabling the connection between xj and the encoder network in a deterministic way. Formally, R (yi, xj) is directly defined via dropout effect on xj as follows: R (yi, xj) = P (yi | y<i, x)−P yi | y<i, x(j,0)  , (9) where x(j,0) denotes the sequence by replacing xj with a word whose embedding is a zero vector. In this way, the computation in equation 9 is much faster than the Monte Carlo sampling approach involving multiple samples. It is worth mentioning that equation 9 resembles the Monte Carlo sampling approach with a single sample in calculation, but it is significantly better than MC with a single sample in alignment quality and is very close to MC approach with enough samples, as to be shown in our experiments. Note that the relevance R(yi, xj) ∈[−1, 1], where R(yi, xj) = 1 means ith target word is totally determined by the jth source word; R(yi, xj) = −1 means ith target word and jth source word are mutual exclusive; R(yi, xj) = 0 means jth source word do not affect generating ith target word. To obtain word alignment for a given sentence pair ⟨x, y⟩, after collecting R(yi, xj) one can easily infer word alignment via the MAP strategy as defined in equation 3 by setting αi,j′ = R(yi, xj′). Remark The above R(yi, xj) in equation 7 quantifies the relevance between a target word yi and a source word xj. 
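A minimal sketch of the deterministic prediction difference of Eq. (9), together with the MAP extraction of Eq. (3), is given below; `model.prob` and its `zero_src_pos` argument are hypothetical interfaces standing in for a forward pass in which the embedding of the j-th source word is replaced by a zero vector.

```python
import numpy as np

def pd_relevance_row(model, src_ids, tgt_prefix, tgt_word):
    """R(y_i, x_j) of Eq. (9) for one target word y_i and every source position j:
    the drop in P(y_i | y_<i, x) when the j-th source embedding is zeroed out."""
    base = model.prob(src_ids, tgt_prefix, tgt_word)              # P(y_i | y_<i, x)
    return np.array([
        base - model.prob(src_ids, tgt_prefix, tgt_word, zero_src_pos=j)
        for j in range(len(src_ids))
    ])

def map_alignment(relevance):
    """MAP strategy of Eq. (3): align each target word to the source position with
    the largest relevance score. `relevance` has shape (T', T)."""
    return relevance.argmax(axis=-1)                              # aligned source index per y_i
```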
Similarly, one can quantify the relevance between yi and its history word yk as follows: Ro (yi, yk) = P (yi | y<i, x)−P  yi | y<i(k,0), x  , (10) where Ro indicates the relevance between two target words yi and yk with k < i, and P(yi | y<i(k,0), x) is obtained by disabling the connection between yk and the decoder network, similarly to P yi | y<i, x(j,0)  . Unlike R(yi, xj) capturing word alignment information, Ro(yi, yk) is able to capture word allocation in a target sentence and it will be used to answer a fundamental question why NMT models yields better translation yet worse word alignment compared with SMT in section of experiments. 4 Experiments In this section, we conduct extensive experiments on ZH⇒EN and DE⇒EN translation tasks to evaluate different methods for word alignment induced from the NMT model and compare them with a statistical alignment model FAST ALIGN (Dyer et al., 2013). Then, we use the induced word alignment to understand translation errors both qualitatively and quantitatively. The alignment performance is evaluated by alignment error rate (AER) (Mihalcea and Pedersen, 2003; Koehn, 2009). The proposed methods are implemented on top of TRANSFORMER (Vaswani et al., 2017) which is a state-ofthe-art NMT system. We report AER on NIST05 test set and RWTH data, whose reference alignment was manually annotated by experts (Liu et al., 2016; Ghader and Monz, 2017). More details on data and training these systems are described in Appendix A. 4.1 Inducing Word alignment from NMT Attention Since the bilingual corpus intrinsically includes word alignment in some extent, word alignment by attention should be better than the data intrinsic alignment if attention indeed captures alignment. To obtain the data intrinsic word alignment, we calculate pointwise mutual information (PMI) from the bilingual corpus and then infer word alignment for each bilingual sentence by using the MAP strategy as in equation 3. 2 It is astonishing that word alignment by attention is inconsistent for different layers of TRANSFORMER, although attention in a single layer TRANSFORMER obtains decent word alignment. Referring to Figure 1, for models more than two layers, alignment captured by attention on middle layer(s) is reasonable, but that on low or high layer is obviously worse than PMI. The possible reasons can be explained as follows. The possible functionality of lower layers might be constructing gradually better contextual representation of the word at each position as suggested in recent contextualized embedding works (Peters et al., 2018; Devlin et al., 2018; Radford et al., 2019). So 2 More details in Appendix B. 1297 1 2 3 4 5 6 layer L1 L2 L3 L4 L5 L6 Transformer 56.49 47.80 77.38 55.07 48.40 84.85 64.88 49.32 52.81 90.11 71.31 53.71 49.84 77.55 92.36 73.29 52.66 45.22 51.19 86.92 92.13 0 65.66(PMI) 100 (a) ZH⇒EN 1 2 3 4 5 6 layer L1 L2 L3 L4 L5 L6 Transformer 48.55 45.37 67.20 71.68 46.72 91.02 77.34 47.79 49.47 91.71 85.98 67.09 50.07 67.05 93.16 88.41 72.11 56.68 53.98 82.40 93.97 0 52.95(PMI) 100 (b) DE⇒EN Figure 1: AER of attention at each layer on TRANSFORMER with different number of layers. AER of PMI is shown as white. Blue and red means AER is better and worse than PMI respectively. the AERs become better while more unambiguous representations of the corresponding word are formed. 
However, for higher layers the representational redundancy is accumulated (Voita et al., 2019; Michel et al., 2019) for phrases or other larger meaning spans in the input, so attention is not capturing word-to-word align but more complicated semantic correspondence. Methods Tasks ZH⇒EN DE⇒EN FAST ALIGN 36.57 26.58 Attention mean 56.44 74.59 Attention best 45.22 53.98 EAM 38.88 39.25 PD 41.77 42.81 * Results are measured on TRANSFORMER-L6. Table 1: AER of the proposed methods. Models TRANSFORMER L1 L2 L3 L4 L5 L6 AER 54.50 47.94 40.47 38.40 38.80 38.88 BLEU 36.51 44.83 45.63 47.19 46.35 46.95 * Results are measured on ZH⇒EN task. Table 2: EAM on translation models with different number of layer. Explicit Alignment Model (EAM) As shown in Table 1, EAM outperforms alignment induced from attention by a large margin. However, since EAM employs silver alignment annotations from FAST ALIGN for training the additional parameters, its final AER includes contributions from both the aligned data and the model. To eliminate contribution from the data, we investigate the AERs over different pre-trained translation models with their EAMs trained on the same FAST ALIGN annotated data. We find that a stronger (higher BLEU) translation model generally obtains better alignment (lower AER). As shown in Table 2, TRANSFORMER-L6 generates much better alignment than TRANSFORMER-L1, highly correlated with their translation performances. This suggests that supervision is not enough to obtain good alignment and the hidden units learned by a translation model indeed implicitly capture alignment knowledge by learning translation. In addition, EAM can be thought as a kind of agnostic probe (Belinkov et al., 2017; Hewitt and Manning, 2019) to investigate how much alignment are implicitly learned in the hidden representations. Prediction Difference (PD) As shown in Table 1, PD also delivers better word alignment than attention. PD can be implemented by sampling method or deterministic method. As shown in Table 3, the alignment performance of sampling method is improving as growing of the sample size, because the accuracy of Monte Carlo approach is dependent on the number of samples. And no matter what sample size is, the variance of AER is always ignorable. The reason might be that the arg max operation in equation 3 eliminates the fluctuation of probability matrix. Although using large sample size can achieve nice alignment performance, it is costly in computation. Fortunately, the deterministic method, which employs a single zero embedding rather than embedding of random words, can also achieve nice alignment performance with the same computa1298 Methods Sampling method Deterministic method Sample size 1 2 4 20 50 AER 44.92 43.30 42.42 41.83 41.73 41.77 Variance 0.004 < 10−5 < 10−5 < 10−5 < 10−5 N/A * Results are measured on TRANSFORMER-L6 and ZH⇒EN task Table 3: Comparison between sampling and deterministic methods for prediction difference. tional. One possible reason is that using zero embedding in inference is exactly the same way as dropout in training, making the trained parameters perform well in inference. In the rest of this paper, we employ the deterministic version as the default for PD in this paper. Alignment on CFT words It is well-known that NMT outperforms SMT a lot in translation, and thus it is natural to ask why NMT yields worse alignment than the aligner FAST ALIGN in SMT, as shown in Table 1. 
Because the probability of a target word typically employs the mixed contributions from both source and target sides, NMT may capture good alignment for the target words mostly contributed from source (CFS, such as content words) while bad alignment for the target words mostly contributed from target (CFT, such as function words). To this end, we divide the target words into two categories: for a given sentence pair ⟨x, y⟩, CFS and CFT are formally defined as two sets containing the target word yi satisfies following conditions respectively, maxx∈x R(yi, x) −maxy∈y<i Ro(yi, y) > ϵ, maxy∈y<i Ro(yi, y) −maxx∈x R(yi, x) > ϵ, (11) where ϵ ∈[0, 1) is a probability margin between CFS and CFT words. After dividing the target words into two categories of CFS and CFT words according to the criterion defined above, 3 we evaluate alignment performance for each category and the results are shown in Table 4. We find that NMT indeed captures better alignment for CFS words than the alignment for CFT words, and FAST ALIGN generates much better alignment than NMT for CFT words. Therefore, this fact indicates that CFT words are the reason why NMT generate worse alignment than FAST ALIGN. 3Without affecting main conclusions, ϵ = 0 in this experiment for covering all words in analysis. Experiments with different margins are in Appendix C. Methods Target Words Tasks ZH⇒EN DE⇒EN PD ALL 41.77 42.81 CFS 32.97 33.86 CFT 63.28 65.24 EAM ALL 38.88 39.25 CFS 34.44 36.03 CFT 49.73 47.34 FAST ALIGN ALL 36.57 27.05 CFS 31.02 22.56 CFT 50.80 38.48 * For both tasks the ratio between CFS word count and CFT word count is about 7 : 3. Table 4: AER of CFS and CFT words. Confidence-binned AER Since confidence can reflect translation quality to some extent, we also use the confidence of each target word (the predictive probability) during forced decoding to group the targets into ten bins and report the AER of them in Figure 2. We can find the AER generally decreases as the probability increases. This also indicates that alignment analysis on real translation instead of ground truth may lead to more reliable conclusion since beam search always finds high confidence candidates. 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Probability 25 30 35 40 45 50 55 AER PD:ZH EN EAM:ZH EN FA:ZH EN (a) ZH⇒EN 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Probability 20 30 40 50 60 AER PD:DE EN EAM:DE EN FA:DE EN (b) DE⇒EN Figure 2: Confidence-binned AER on the two datasets. 4.2 Understanding NMT via PD Alignment Which method is better for understanding? Previous experiments mainly consider the alignment for the reference, and show that EAM is better at aligning a reference word to source words than PD. However, in order to better understand 1299 sān xía gōng chéng dì xìa dìan zhàn jí jīang kāi gōng jìan shè 三峡 工程 地下 电站 即将 开工 建设 Three gorges project ’s underground powerhouse to Three gorges project ’s underground powerhouse construction start construction begin construction R: T: (a) Forced Decoding Error bā xiè sī gǔ dāng xuǎn luó mǎ ní yà zǒng tǒng chóu zǔ zhèng fǔ miàn lín tiǎo zhàn 巴谢斯古 当选 罗马尼亚 总统 筹组 政府 面临 挑战 Basescu elected romanian president , faces challenge of forming goverment Romanian president elected to form goverment T: R: (b) Real Decoding Error Figure 3: Two examples of showing the translation errors caused by word alignment errors both in forced decoding and real decoding on TRANSFORMER-L6. Red arrow means wrong alignment while Green arrow means the golden alignment. red word means translation error. 
‘R’ denotes reference sentence and ‘T’ denotes translation sentence. the translation process of a NMT model, it is helpful to analyze the alignment of real translations derived from the NMT model itself. This is also in accordance with the confidence-binned observation previously. The alignment of the real translation actually provides some insight on the causal relationship among source and target words. To obtain AER on real decoding, we manually annotate word alignment of the real translations for 200 source sentences randomly selected from the ZH⇒EN test set. As shown in Table 5, PD yields better alignment for the real translation than EAM, and we even surprisingly find that its alignment performance is better than FAST ALIGN. 4 This quantitative finding demonstrates PD is better for understanding the real translation in general rather than only for some special case. Models AER PD & TRANSFORMER-L6 20.44 EAM & TRANSFORMER-L6 29.77 FAST ALIGN 25.23 * Results are measured on sampled 200 sentences of ZH⇒EN task, and golden alignment for real translation are human labeled (Appendix D) Table 5: Alignment of Real Translation. It is worth noting that EAM does not only deliver worse word alignment for real translations than PD, but also be dangerous to understand NMT through its word alignment. The main reason is that EAM relies on an external aligned dataset with supervision from statistical word aligner FAST ALIGN, and thus the characteristic of its alignment result are similar to that of FAST ALIGN, leading to the understanding biased to FAST ALIGN. In contrast, PD only relies on prediction from a neural model to define the relevance, it has been successfully used to understand 4The numbers in Table 5 are not comparable to those in Table 1 and Figure 2, because they employ different translations in the target side leading to different ground-truth alignments, which are crucial for evaluating alignment. and interpret a neural model (Zintgraf et al., 2017). Therefore, in the rest of this subsection, we try to understand NMT by using PD both qualitatively and quantitatively. Analyze translation errors in forced decoding We consider the forced decoding translation error as follows. We fix the translation history as the prefix of the reference y<i at each timestep i and then check whether the 1-best word ˆyi = arg maxy P(y|y<i, x) is exactly yi. If ˆyi ̸= yi we say the NMT model makes an error decision at this timestep. We give a case of this kind of error in Figure 3(a). After visualizing the alignment of yi by PD, we find that its alignment in red color is not correct compared to the ground-truth alignment in green color. As a result, the NMT model can not capture the sufficient context to accurately predict the reference word yi and thereby generates an incorrect word ‘construction’. Besides the case study, we try to quantitatively interpret that alignment errors may lead to translation errors. To this end, we divide all timesteps from the reference of the test dataset into two categories, i.e. one with right alignment and the other with wrong alignment. Then we calculate the forced decoding translation error rates for each category, i.e. the ratio between the number of timesteps making error decisions in one category and the total number of timesteps, as depicted in Table 6. From the table, it is clear that wrong alignment is more likely to cause a translation error while correct alignment is likely to make a correct translation decision. 
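A sketch of how the forced-decoding error rates behind Table 6 can be tallied is shown below; the per-position record fields (`alignment_correct`, `prediction_error`) are hypothetical names for whether the PD alignment of a reference word matches the gold alignment and whether the 1-best forced-decoding prediction differs from the reference word.

```python
from collections import defaultdict

def forced_decoding_error_rates(positions):
    """Group every reference position by alignment correctness and compute the ratio
    of positions whose forced-decoding 1-best prediction is wrong (Table 6 statistic)."""
    counts = defaultdict(lambda: [0, 0])                # bucket -> [errors, total]
    for pos in positions:
        bucket = "right_alignment" if pos["alignment_correct"] else "wrong_alignment"
        counts[bucket][1] += 1
        if pos["prediction_error"]:                     # argmax_y P(y | y_<i, x) != y_i
            counts[bucket][0] += 1
    return {bucket: errors / total for bucket, (errors, total) in counts.items()}
```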
Particularly, compared with right alignment, when alignment is wrong, the forced decoding translation error rate of CFS words increases much more than CFT words (∆). This observation indicates word alignment errors of CFS words are mainly responsible for translation errors instead of CFT words. 1300 zhèng hé shì shì jiè zhù míng háng hǎi jiā 郑 和是 世界 著名 航海家 Zheng he is a world famous navigator (a) Gold Alignment zhèng hé shì shì jiè zhù míng háng hǎi jiā 郑 和是 世界 著名 航海家 Zheng he is a world famous navigator (b) Alignment to Source Side zhèng hé shì shì jiè zhù míng háng hǎi jiā 郑 和是 世界 著名 航海家 Zheng he is a world famous navigator (c) Alignment with CFS & CFT Figure 4: An example of word alignment and translation produced by TRANSFORMER-L6. Red arrow means wrong alignment and blue arrow means the prediction is attributed to a target word. The word in light font do not align to any source word, while red word means wrong translation. Tasks Target Words Right Alignment Wrong Alignment ∆ ZH⇒EN ALL 34.87 49.24 14.37 CFS 35.34 53.91 18.57 CFT 32.86 43.99 11.13 DE⇒EN ALL 23.63 35.64 11.01 CFS 24.21 38.25 14.04 CFT 26.40 32.38 5.98 * Results are measured on TRANSFORMER-L6. Table 6: Forced decoding translation error rate for CFS/CFT words with right/wrong alignment. Analyze translation errors in real decoding Besides the forced decoding translation error, we care more about search-aware statistics in real decoding. Specifically, we identify words in the reference which are recalled through the real translation, and those unrecalled words are called real decoding translation errors defined as {y} \ {ˆy}, the difference between the two sets where {y} is the set of words in y. As shown in the case in Figure 3(b), the identified translation error ‘faces’ is wrongly aligned by PD to ‘b¯a xi`e s¯i gˇu’, which may strongly correlate to the under translation of ‘mi`an l´ın’ at the source side. Tasks Target Words Right Alignment Wrong Alignment ∆ ZH⇒EN ALL 31.72 40.73 9.01 CFS 31.03 41.44 10.41 CFT 34.67 39.92 5.25 DE⇒EN ALL 23.84 40.09 16.25 CFS 22.31 39.04 16.73 CFT 30.53 41.40 10.87 * Results are measured on TRANSFORMER-L6. Table 7: Real decoding translation error rate for CFS/CFT words with right/wrong alignment. For quantitative analysis, the same as the forced decoding, we split all target words into two parts, i.e. right alignment and wrong alignment, and then we evaluate the real decoding translation error rate for each of them via P i |{yi} \ {ˆyi}|/ P i |{yi}|. As shown in Table 7, there is an obvious gap between the real decoding translation error of right alignment and wrong alignment, which shows alignment errors have adverse effect on translation quality. For CFS and CFT words, Table 7 demonstrates that alignment errors cause decreasing of translation quality for both sets. Same as forced decoding, the real decoding translation error are also mainly attributed to CFS words. This suggests improving the ability of learning word alignment for CFS words is potential to improve translation quality for neural machine translation. Interpret Translation via CFT Alignment As the translation error has been shown related to the alignment error, the translation success can also be understood by word alignment. Previous research (Ding et al., 2017; Alvarez-Melis and Jaakkola, 2017) have attempted to interpret the decision-making of translation by aligning target words to source words. 
However, there is nonignorable amount of translated target words are mostly contributed from target side instead of source side. As shown in Figure 4(a), as a functional word, ‘a’ should not be aligned to any source word. However, in Figure 4(b) PD incorrectly aligned ‘a’ to ‘h´ang hˇai j¯ia’ by only considering the contributions from the source side, and this leads to a misunderstanding for why ‘a’ is translated. Fortunately, according to equation 11, PD is good at distinguishing where the contributions come from for both source and target sides. As shown in Figure 4(c), considering alignment of words in CFS, ‘a’ is superbly not aligned to any source word because it belongs to CFT and should be aligned to ‘is’, which explains why NMT correctly translates ‘a’. Although the ambiguous Chinese word ‘h´e’ mostly means ‘and’, TRANSFORMER is able to translate it perfectly as a given name ‘h´e’ as shown 1301 in Figure 4(c). 5 The main reason is that NMT captures the context of the surname ‘zheng’ by PD over target side besides the context of ‘h´e’ by PD over source side, thanks to its more powerful language model effect. 5 Related Work In NMT, there are many notable researches which mention word alignment captured by attention in some extent. For example, Bahdanau et al. (2014) is the first work to show word alignment examples by using attention in an NMT model. Tu et al. (2016) quantitatively evaluate word alignment captured by attention and find that its quality is much worse than statistical word aligners. Motivated by this finding, Chen et al. (2016), Mi et al. (2016) and Liu et al. (2016) improve attention with the supervision from silver alignment results obtained by statistical aligners, in the hope that the improved attention leads to better word alignment and translation quality consequently. More recently, there are also works (Alkhouli et al., 2018) that directly model the alignment and use it to sharpen the attention to bias translation. Despite the close relation between word alignment and attention, Koehn and Knowles (2017) and Ghader and Monz (2017) discuss the differences between word alignment and attention in NMT. Most of these works study word alignment for the same kind of NMT models with a single attention layer. One of our contribution is that we propose modelagnostic methods to study word alignment in a general way which deliver better word alignment quality than attention method. Moreover, for the first time, we further understand NMT through alignment and particularly quantify the effect of alignment errors on translation errors for NMT. The prediction difference method in this paper actually provides an avenue to understand and interpret neural machine translation models. Therefore, it is closely related to many works on visualizing and interpreting neural networks (Lei et al., 2016; Bach et al., 2015; Zintgraf et al., 2017). Indeed, our method is inherited from (Zintgraf et al., 2017), and our advantage is that it is computationally efficient particularly for those tasks with a large vocabulary. In sequence-to-sequence tasks, Ding et al. (2017) focus on model interpretability by modeling how influence propagates across 5It is interesting that SMT (MOSES) incorrectly translates this word into ‘and’ in our preliminary experiment. hidden units in networks, which is often too restrictive and challenging to achieve as argued by Alvarez-Melis and Jaakkola (2017). 
And instead, Alvarez-Melis and Jaakkola (2017) concentrate on prediction interpretability with only oracle access to the model generating the prediction. To achieve this effect, they propose a casual learning framework to measure the relevance between a pair of source and target words. Our method belongs to the type of prediction interpretability similar to Alvarez-Melis and Jaakkola (2017), but ours is a unified and parameter-free method rather than a pipeline and parameter-dependent one. In addition, both Ding et al. (2017) and Alvarez-Melis and Jaakkola (2017) qualitatively demonstrate interpretability by showing some sentences, while we exhibit the interpretability by quantitatively analyzing all sentences in a test set. 6 Conclusions and Future Work This paper systematically studies the word alignment from NMT. It firstly reveals that attention may not capture word alignment for an NMT model with multiple attentional layers. Therefore, it proposes two methods (explicit model and prediction difference) to acquire word alignment which are agnostic to specific NMT models. Then it suggests prediction difference is better for understanding NMT and visualizes NMT from word alignment induced by prediction difference. In particular, it quantitatively analyzes that alignment errors which are likely to lead to translation errors at word level measured by different metrics. In the future, we believe more work on improving CFS alignment is potential to improve translation quality, and we will investigate on using source context and target history context in a more robust manner for better predicting CFS and CFT words. Acknowledgements We would like to thank all the anonymous reviewers for their valuable suggestions. This research was supported by Tencent AI Lab, Hong Kong RGC GRF grant # 14200618, Hong Kong ITC ITSP Tier 2 grant # ITS/105/18FP and Shenzhen Science and Technology Innovations project JCYJ20170413161616162. 1302 References Tamer Alkhouli, Gabriel Bretschner, and Hermann Ney. 2018. On the alignment problem in multihead attention-based neural machine translation. In WMT. David Alvarez-Melis and Tommi S Jaakkola. 2017. A causal framework for explaining the predictions of black-box sequence-to-sequence models. arXiv preprint arXiv:1707.01943. Leila Arras, Franziska Horn, Gr´egoire Montavon, Klaus-Robert M¨uller, and Wojciech Samek. 2016. Explaining predictions of non-linear classifiers in nlp. arXiv preprint arXiv:1606.07298. Sebastian Bach, Alexander Binder, Gr´egoire Montavon, Frederick Klauschen, Klaus-Robert M¨uller, and Wojciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James R. Glass. 2017. What do neural machine translation models learn about morphology? In ACL. Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational linguistics, 19(2):263–311. Wenhu Chen, Evgeny Matusov, Shahram Khadivi, and Jan-Thorsten Peter. 2016. Guided alignment training for topic-aware neural machine translation. In Proceedings of AMTA. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. 
Bert: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Yanzhuo Ding, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Visualizing and understanding neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1150– 1159. Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann Dauphin. 2017. Convolutional sequence to sequence learning. In ICML. Hamidreza Ghader and Christof Monz. 2017. What does attention in neural machine translation pay attention to? arXiv preprint arXiv:1710.03348. John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Philipp Koehn. 2009. Statistical machine translation. Cambridge University Press. Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Workshop on Neural Machine Translation. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of NAACL-HLT, pages 48–54. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In In Proceedings of EMNLP. Xintong Li, Lemao Liu, Zhaopeng Tu, Shuming Shi, and Max Meng. 2018. Target foresight based attention for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1380–1390. Lemao Liu, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. 2016. Neural machine translation with supervised attention. In Proceedings of COLING. Yang Liu, Qun Liu, and Shouxun Lin. 2005. Log-linear models for word alignment. In Proceedings of ACL, pages 459–466. Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Supervised attentions for neural machine translation. In Proceedings of EMNLP. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? arXiv preprint arXiv:1905.10650. Rada Mihalcea and Ted Pedersen. 2003. An evaluation exercise for word alignment. In Proceedings of the HLT-NAACL 2003 Workshop on Building and using parallel texts: data driven machine translation and beyond-Volume 3, pages 1–10. Association for Computational Linguistics. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational linguistics, 29(1):19–51. 1303 Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matthew Gardner, Christopher Clark, Kenton Lee, and Luke S. Zettlemoyer. 2018. Deep contextualized word representations. In NAACL-HLT. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1:8. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. 
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Ben Taskar, Simon Lacoste-Julien, and Dan Klein. 2005. A discriminative matching approach to word alignment. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT ’05. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 76–85. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multihead self-attention: Specialized heads do the heavy lifting, the rest can be pruned. arXiv preprint arXiv:1905.09418. Luisa M Zintgraf, Taco S Cohen, Tameem Adel, and Max Welling. 2017. Visualizing deep neural network decisions: Prediction difference analysis. arXiv preprint arXiv:1702.04595.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1304–1312 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1304 Imitation Learning for Non-Autoregressive Neural Machine Translation Bingzhen Wei1, Mingxuan Wang, Hao Zhou, Junyang Lin1,3, Xu Sun1,2 1MOE Key Lab of Computational Linguistics, School of EECS, Peking University 2Deep Learning Lab, Beijing Institute of Big Data Research, Peking University 3School of Foreign Languages, Peking University {weibz,linjunyang,xusun}@pku.edu.cn [email protected], [email protected] Abstract Non-autoregressive translation models (NAT) have achieved impressive inference speedup. A potential issue of the existing NAT algorithms, however, is that the decoding is conducted in parallel, without directly considering previous context. In this paper, we propose an imitation learning framework for nonautoregressive machine translation, which still enjoys the fast translation speed but gives comparable translation performance compared to its auto-regressive counterpart. We conduct experiments on the IWSLT16, WMT14 and WMT16 datasets. Our proposed model achieves a significant speedup over the autoregressive models, while keeping the translation quality comparable to the autoregressive models. By sampling sentence length in parallel at inference time, we achieve the performance of 31.85 BLEU on WMT16 Ro→En and 30.68 BLEU on IWSLT16 En→De. 1 Introduction Neural machine translation (NMT) with encoderdecoder architectures (Sutskever et al., 2014; Cho et al., 2014) achieve significantly improved performance compared with traditional statistical methods(Koehn et al., 2003; Koehn, 2010). Nevertheless, the autoregressive property of the NMT decoder has been a bottleneck of the translation speed. Specifically, the decoder, whether based on Recurrent Neural Network (RNN) (Hochreiter and Schmidhuber, 1997; Cho et al., 2014) or attention mechanism (Vaswani et al., 2017), sequentially generates words. The latter words are conditioned on previous words in a sentence. Such bottleneck disables parallel computation of decoder, which is serious for NMT, since the NMT decoding with a large vocabulary is extremely time-consuming. Recently, a line of research work (Gu et al., 2017; Lee et al., 2018; Libovick and Helcl, 2018; (a) Autoregressive NMT (b) Non-Autoregressive NMT Figure 1: Neural architectures for Autoregressive NMT and Non-Autoregressive NMT. Wang et al., 2018) propose to break the autoregressive bottleneck by introducing non-autoregressive neural machine translation (NAT). In NAT, the decoder generates all words simultaneously instead of sequentially. Intuitively, NAT abandon feeding previous predicted words into decoder state at the next time step, but directly copy source encoded representation (Gu et al., 2017; Lee et al., 2018; Guo et al., 2018; Wang et al., 2019) as inputs of the decoder. Thus, the generation of the NAT models does not condition on previous prediction. NAT enables parallel computation of decoder, giving significantly fast translation speed with moderate accuracy (always within 5 BLEU). Figure 1 shows the difference between autoregressive and non-autoregressive models. However, we argue that current NAT approaches suffer from delayed supervisions (or rewards) and large search space in training. NAT decoder simultaneously generates all words of the translation, the search space of which is very large. 
For one time step, decoding states across layers (more than 16 layers) and time steps could be regarded as a 2-dimensional sequential decision process. Every decoding state has not only 1305 to decide which part of target sentence it will focus on, but also to decide the correct target word of that part. All decisions are made by interactions with other decoding states. Delayed supervisions (correct target word) will be obtained by decoding states in the last layer, and intermediate decoding states will be updated by gradient propagation from the last layer. Therefore, the training of NAT is non-trivial and it may be hard for NAT to achieve a good model, which is the same case that reinforcement learning (Mnih et al., 2013, 2015) is hard to learn with large search space. The delayed supervision problem is not severe for autoregressive neural machine translation(AT) because it predicts words sequentially. Given the previous words, contents to be predicted at current step are relatively definite, thus the search space of AT is exponentially lower than NAT. We blame the delayed supervision and large search space for the performance gap between NAT and AT. In this paper, we propose a novel imitation learning framework for non-autoregressive NMT (imitate-NAT ). Imitation learning has been widely used to alleviate the problem of huge search space with delayed supervision in RL. It is straightforward to bring the imitation learning idea for boosting the performance of NAT. Specifically, we introduce a knowledgeable AT demonstrator to supervise each decoding state of NAT model. In such case, Specifically, We propose to employ a knowledgeable AT demonstrator to supervise every decoding state of NAT across different time steps and layers, which works pretty well practically. Since the AT demonstrator is only used in training, our proposed imitate-NAT enjoys the high speed of NAT without suffering from its relatively lower translation performance. Experiments show that our proposed imitateNAT is fast and accurate, which effectively closes the performance gap between AT and NAT on several standard benchmarks, while maintains the speed advantages of NAT (10 times faster). On all the benchmark datasets, our imitate-NAT with LPD achieves the best translation performance, which is even close to the results of the autoregressive model. 2 Background In the following sections, we introduce the background about Autoregressive Neural Machine Translation and Non-Autoregressive Neural Machine Translation. 2.1 Autoregressive Neural Machine Translation Sequence modeling in machine translation has largely focused on autoregressive modeling which generate a target sentence word by word from left to right, denoted by pθ(Y |X), where X = {x1 · · · , xT } and Y = {y1, · · · , yT ′} represent the source and target sentences as sequences of words respectively. θ is a set of parameters usually trained to minimize the negative loglikelihood: LAT = − T ′ X i=1 log p(yi|y<i, X). (1) where T and T ′ is the length of the source and the target sequence respectively. Deep neural network with autoregressive framework has achieved great success on machine translation, with different choices of architectures. The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT. Despite the recent success, the inherently sequential architecture prevents RNMTs from being parallelized during training and inference. 
Following RNMT, CNNs and self-attention based models have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices. However, the autoregressive nature still creates a bottleneck at inference stage, since without ground truth, the prediction of each target token has to condition on previously predicted tokens. 2.2 Non-Autoregressive Neural Machine Translation As a solution to the issue of slow decoding, Gu et al. (2017) recently proposed non-autoregressive model (NAT) to break the inference bottleneck by exposing all decoder inputs to the network simultaneously. NAT removes the autoregressive connection directly and factorizes the target distribution into a product of conditionally independent per-step distributions. The negative loglikelihood loss function for NAT model become is then defined as: LNAT = − T ′ X i=1 log p(yi|X). (2) 1306 Figure 2: Illustration of the proposed model, where the black solid arrows represent differentiable connections and the dashed arrows are non-differentiable operations. Without loss of generality, this figure shows the case of T=3, T’=4. The left side of the figure is the DAT model and the right side is the imitate-NAT . The bottom is the encoder and the top is the decoder. The internal details of Imitation Module are shown in Figure 3. The approach breaks the dependency among the target words across time, thus the target distributions can be computed in parallel at inference time. In particular, the encoder stays unchanged from the original Transformer network. A latent fertility model is then used to copy the sequence of source embeddings as the input of the decoder. The decoder has the same architecture as the encoder plus the encoder attention. The best results were achieved by sampling fertilities from the model and then rescoring the output sentences using an autoregressive model. The reported inference speed of this method is 2-15 times faster than a comparable autoregressive model, depending on the number of fertility samples. This desirable property of exact and parallel decoding however comes at the expense of potential performance degradation. Since the conditional dependencies within the target sentence (yt depends on y<t) are removed from the decoder input, the decoder is not powerful enough to leverage the inherent sentence structure for prediction. Hence the decoder has to figure out such target-side information by itself just with the source-side information during training, which leads to a larger modeling gap between the true model and the neural sequence model. Therefore, strong supervised signals could be introduced as the latent variable to help the model learn better internal dependencies within a sentence. In AT models, the generation of the current token is conditioned on previously generated tokens , which provides strong target side context information. In contrast, NAT models generate tokens in parallel, thus the target-side dependency is indirect and weak. Consequently, the decoder of a NAT model has to handle the translation task conditioned on less and weaker information compared with its AT counterpart, thus leading to inferior accuracy. 3 Proposed Method: imitate-NAT In this section, we propose an imitation learning framework (imitate-NAT ) to close the performance gap between the NAT and AT. 1307 (a) Imitation module of DAT (b) Imitation module of imitate-NAT Figure 3: The imitation module of AT demonstrator and NAT learner. 
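Before turning to the proposed method, a minimal PyTorch-style sketch of the NAT objective in Eq. (2) is given below; because every target position is conditioned only on the source, all positions can be scored in parallel with a single cross-entropy call. The tensor shapes and the `pad_id` argument are assumptions for illustration.

```python
import torch.nn.functional as F

def nat_loss(logits, targets, pad_id=0):
    """Eq. (2): sum of independent per-position negative log-likelihoods.
    logits: (batch, T', vocab), produced in parallel; targets: (batch, T')."""
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)),   # flatten all target positions
        targets.view(-1),
        ignore_index=pad_id,                # skip padded positions
        reduction="sum",
    )
```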
3.1 Preliminary of imitate-NAT We bring the intuition of imitation learning to nonautoregressive NMT and adapt it to our scenario. Specifically, the NAT model can be regarded as a learner, which will imitate a knowledgeable demonstrator at each decoding state across layers and time steps. However, obtaining an adequate demonstrator is non-trivial. We propose to employ an autoregressive NMT model as the demonstrator, which is expected to offer efficient supervision to each decoding state of the NAT model. Fortunately, the AT demonstrator is only used in training, which guarantees that our proposed imitateNAT enjoys the high speed of NAT model without suffering from its relatively lower performance. In following parts, we will describe the AT demonstrator and the NAT learner in our imitateNAT framework, respectively. 3.2 AT Demonstrator For the proposed AT, we apply a variant of the transformer model as the demonstrator, named DAT. The encoder stays unchanged from the original Transformer network. A crucial difference lies in that the decoder introduces the imitation module which emits actions at every time step. The action brings sequential information, thus can be used as the guidance signal during the NAT training process. The input of each decoder layer Oℓ = {oℓ 1, oℓ 2, · · · , oℓ T ′} can be considered as the observation (or environment) of the IL framework, where ℓdonates the layer of the observation. Let Aℓ= {aℓ 1, aℓ 2, · · · , aℓ T ′} ∈A denotes an action sequence from the action space A. The action space A is finite and its size n is a hyperparameter, representing the number of action categories. The distribution of the action of DAT can be then fed to the NAT model as the training signal. Let Π denotes a policy class, where each πℓ∈Π generates an action distribution sequence Aℓin response to a context sequence Oℓ. Predicting actions Aℓmay depend on the contexts of previous layer Oℓand policies πℓcan thus be viewed as mapping states to actions. A roll-out of π given the context sequence Oℓto determine the action sequence Aℓ, which is: at = arg max(πℓ(oℓ t)) (3) where πℓ(oℓ t) = softmax(FFN(oℓ t)). (4) The distribution πℓ(oℓ t) represents the probability of the decision depends on the current state or environment oℓ t. The discrete operation arg max(·) suffers from the non-differentiable problem which makes it impossible to train the policy from an end to end framework. Note that unlike the general reinforcement or imitation learning framework, we consider to compute the action state which as the expectation of the embedding of the action at: uℓ t = Eaℓ t∼πℓ(oℓ t)δ(aℓ t), (5) where δ(aℓ t) ∈Rk returns the embedding of the action aℓ t and k denotes the embedding dimension. The states of next layer are then based on the current output of the decoder state and the emitted action state: oℓ+1 t = Transfer(uℓ t + oℓ t), (6) where Transfer(·) denotes the vanilla transformer decoding function including a self-attention layer, an encoder-decoder attention layer and followed by a FFN layer (Vaswani et al., 2017). 3.2.1 Action Distribution Regularization The supervised signal for the action distribution π(ot) is not direct in NAT, thus the action prediction can be viewed as an unsupervised clustering problem. One potential issue is the unbalanced distribution of action. Inspired by Xie et al. (2016), we introduce a regularization method to 1308 increase the space utilization. 
Formally, an moving average c is applied to calculate the cumulative activation level for each action category: c ←α · c + (1 −α) T ′ X t=1 π(ot)/T ′ (7) We set α 0.9 in our experiments. Then π′(oi) can be re-normalized with the cumulative history c: π′(ot) = π(ot)2/c P j π(ot)2 j/cj (8) The convex property of the quadratic function can adjust the distribution to achieve the purpose of clustering. The role of c is to redistribute the probability distribution of π(ot), which leads to a more balanced category assignment. We define our objective as a KL divergence loss between π(ot) and the auxiliary distribution π′(ot) as follows: Lπ = X t π′(ot) log π′(ot) π(ot) (9) 3.3 NAT learner 3.3.1 Soft Copy To facility the imitation learning process, our imitate-NAT is based on the AT demonstrator described in section 3.2. The only difference lies in that the initialization of the decoding inputs. Previous approaches apply a UniformCopy method to address the problem. More specifically, the decoder input at position t is the copy of the encoder embedding at position Round(T ′t/T) (Gu et al., 2017; Lee et al., 2018). As the source and target sentences are often of different lengths, AT model need to predict the target length T ′ during inference stage. The length prediction problem can be viewed as a typical classification problem based on the output of the encoder. we follow Lee et al. (2018) to predict the length of the target sequence. The proposed Round function is unstable and non-differentiable, which make the decoding task difficult. We therefore propose a differentiable and robust method named SoftCopy following the spirit of the attention mechanism (Hahn and Keller, 2016; Bengio, 2009). The weight wi,j depends on the distance relationship between the source position i and the target position j. wij = softmax(−|j −i|/τ) (10) τ is a trainable parameters used to adjust the degree of focus when copying. Then the input of the target at position j can be computed as : yj = T X i=0 wijxi, (11) where xi is usually the source embedding at position i. It is also worth mentioning that we take the top-most hidden states instead of the word embedding as xi in order to cache the global context information. 3.3.2 Learning from AT Experts The conditional independence assumption prevents NAT model from properly capturing the highly multimodal distribution of target translations. AT models takes already generated target tokens as inputs, thus can provide complementary extension information for NAT models. A straightforward idea to bridge the gap between NAT and AT is that NAT can actively learn the behavior of AT step by step. The AT demonstrator generate action distribution πAT (O) ∈Rn as the posterior supervisor signal. We expect the supervision information can guide the generation process of NAT. The imitateNAT exactly follows the same decoder structure with our AT demonstrator, and emits distribution πNAT (O) ∈Rn to learn from AT demonstrator step by step. More specifically, we try to minimize the cross entropy of the distributions between the two policies: LIL = H(πAT (ot), πNAT (ot)) (12) = −EπAT (ot) log πNAT (ot) (13) 3.4 Training In the training process, the action distribution regularization term described in 3.2.1 is combined with the commonly used cross-entropy loss in Eq. 1: L∗ AT = LAT + λ1Lπ (14) For NAT models, the imitation learning term are combined with the commonly used cross-entropy loss in Eq. 
2: L∗ NAT = LNAT + λ2LIL (15) where λ1 and λ2 are hyper-parameters, which are set to 0.001 in our experiments. 1309 Models WMT14 WMT16 IWSLT16 En→De De→En En→Ro Ro→En En→De Speedup Transformer (Vaswani et al., 2017) 27.41 31.29 / / 30.90 1.00× AT Demonstrator 27.80 31.25 33.70 32.59 30.85 1.05× NAT-FT(Gu et al., 2017) 17.69 21.47 27.29 29.06 26.52 15.60× NAT-FT(+NPD s=10) 18.66 22.41 29.02 30.76 27.44 7.68× NAT-FT(+NPD s=100) 19.17 23.20 29.79 31.44 28.16 2.36× NAT-IR(idec = 1) 13.91 16.77 24.45 25.73 22.20 8.90× NAT-IR(idec = 10) 21.61 25.48 29.32 30.19 27.11 1.50× LT 19.80 / / / / 5.78× LT(rescoring 10) 21.0 / / / / / LT(rescoring 100) 22.5 / / / / / NAT without imitation 19.69 22.71 / / 25.34 18.6× imitate-NAT 22.44 25.67 28.61 28.90 28.41 18.6× imitate-NAT (+LPD,∆T = 3) 24.15 27.28 31.45 31.81 30.68 9.70× Table 1: The test set performances of AT and NAT models in BLEU score. NAT-FT, NAT-IR and LT denotes the competitor method in (Gu et al., 2017), (Lee et al., 2018) and (Kaiser et al., 2018) respectively. imitate-NAT is our proposed NAT with imitation learning. 4 Experiments We evaluate our proposed model on machine translation tasks and provide the analysis. We present the experimental details in the following, including the introduction to the datasets as well as our experimental settings. Datasets We evaluate the proposed method on three widely used public machine translation corpora: IWSLT16 En-De(196K pairs), WMT14 EnDe(4.5M pairs) and WMT16 En-Ro(610K pairs). All the datasets are tokenized by Moses Koehn et al. (2007) and segmented into 32k−subword symbols with byte pair encoding Sennrich et al. (2016) to restrict the size of the vocabulary. For WMT14 En-De, we use newstest-2013 and newstest-2014 as development and test set respectively. For WMT16 En-Ro, we use newsdev-2016 and newstest-2016 as development and test sets respectively. For IWSLT16 En-De, we use test2013 as validation for ablation experiments. Knowledge Distillation Datasets Sequencelevel knowledge distillation is applied to alleviate multimodality in the training dataset, using the AT demonstrator as the teachers (Kim and Rush, 2016). We replace the reference target sentence of each pair of training example (X, Y ) with a new target sentence Y ∗, which is generated from the teacher model(AT demonstrator). Then we use the new dataset (X, Y ∗) to train our NAT model. To avoid the redundancy of running fixed teacher models repeatedly on the same data, we decode the entire training set once using each teacher to create a new training dataset for its respective student. Model Settings We first train the AT demonstrator and then freeze its parameters during the training of imitate-NAT . In order to speed up the convergence of NAT training, we also initialize imitate-NAT with the corresponding parameters of the AT expert as they have similar architecture. For WMT14 En-De and WMT16 En-Ro, we use the hyperparameter settings of base Transformer model in Vaswani et al. (2017)(dmodel = 512, dhidden = 512, nlayer = 6 and nhead = 8). As in Gu et al. (2017); Lee et al. (2018), we use the small model (dmodel = 278, dhidden = 507, nlayer = 5 and nhead = 2) for IWSLT16 En-De. For sequence-level distillation, we set beam size to be 4. For imitate-NAT , we set the number of action category to 512 and found imitate-NAT is robust to the setting in our preliminary experiments. 
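To make the Soft-Copy initialization of Section 3.3.1 concrete, the following is a minimal numpy sketch of Eq. (10)-(11); the fixed temperature value, the random inputs, and the function name are illustrative assumptions, since in the actual model the inputs $x_i$ come from the trained encoder's top layer and $\tau$ is a learned parameter.

```python
import numpy as np

def soft_copy(encoder_states, target_len, tau=1.0):
    """Soft-Copy decoder-input initialization (Eq. 10-11), as a numpy sketch.

    encoder_states: (T_src, d) top-most encoder hidden states.
    target_len:     predicted target length T'.
    tau:            focus temperature (a trained scalar in the paper; fixed here).
    Returns a (T', d) matrix of initial decoder inputs.
    """
    T_src = encoder_states.shape[0]
    src_pos = np.arange(T_src)                      # i = 0 .. T_src - 1
    tgt_pos = np.arange(target_len)[:, None]        # j as a column vector
    scores = -np.abs(tgt_pos - src_pos) / tau       # -|j - i| / tau, shape (T', T_src)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over source positions
    return weights @ encoder_states                 # y_j = sum_i w_ij * x_i

# Usage: initialize a 7-token decoder input from 5 encoder states of width 4.
inputs = soft_copy(np.random.randn(5, 4), target_len=7, tau=0.5)
print(inputs.shape)  # (7, 4)
```

Because every target position attends softly over all source positions, gradients can flow through the copy weights, in contrast to the hard Round-based Uniform-Copy.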
Length Parallel Decoding For inference, we follow the common practice of noisy parallel decoding (Gu et al., 2017), which generates a number of decoding candidates in parallel and selects the best translation via re-scoring with the AT teacher. In our scenario, we first train a module to predict the target length as $\hat{T}$. However, due to the inherent uncertainty of the data itself, it is hard to predict the target length accurately. A reasonable solution is to generate multiple translation candidates by predicting different target lengths in $[\hat{T}-\Delta T, \hat{T}+\Delta T]$, which we call LPD (length parallel decoding). The model generates these outputs in parallel, and we then use the pre-trained autoregressive model to identify the best overall translation.

5 Results and Analysis

Competitor We include three NAT works as our competitors: the NAT with fertility (NAT-FT) (Gu et al., 2017), the NAT with iterative refinement (NAT-IR) (Lee et al., 2018), and the NAT with discrete latent variables (Kaiser et al., 2018). For all our tasks, we obtain the baseline performance either by directly using the performance figures reported in previous work when available, or by producing them with the open-source implementations of the baseline algorithms on our datasets. The results are shown in Table 1.

1. imitate-NAT significantly improves translation quality by a large margin. On all the benchmark datasets, our imitate-NAT with LPD achieves the best translation performance, which is even close to the results of the autoregressive model, e.g., 30.68 vs. 30.85 on the IWSLT16 En→De task, and 31.81 vs. 32.59 on the WMT16 Ro→En task. It is also worth mentioning that introducing the imitation module to the AT demonstrator affects neither the performance nor the inference speed compared with the standard Transformer model.

2. Imitation learning plays an important role in bridging the gap between imitate-NAT and the AT demonstrator. Clearly, imitate-NAT leads to remarkable improvements over the competitor without the imitation module (almost 3 BLEU points on average). To make the comparison fair, the competitor follows exactly the same training steps as imitate-NAT, including the initialization, knowledge distillation, and Soft-Copy; the only difference is the imitation module.

3. imitate-NAT achieves better latency. For NAT-FT, a large sample size (10 or 100) is required to obtain satisfactory results, which seriously affects the inference speed of the model. For both NAT-FT and NAT-IR, the efficiency of the models with refinement techniques drops dramatically (from 15.6× to 2.36× for NAT-FT and from 8.9× to 1.5× for NAT-IR). Our imitate-NAT achieves even better performance with faster speed; the speedup compared with the AT model is 9.7×.

5.1 Ablation Study

To further study the effects of the different techniques, Table 2 shows the translation performance of different NAT model variants on the IWSLT16 En-De translation task.

Soft-Copy v.s. Uniform-Copy The experimental results show that Soft-Copy is better than Uniform-Copy. Uniform-Copy employs a hard copy mechanism and directly copies the source embeddings without considering global information, which increases the learning burden of the decoder. Our model instead takes the output of the encoder as input and uses a differentiable copy mechanism, which achieves much better results (25.34 vs. 20.71; see lines 3 and 2).

Imitation Learning v.s. Non Imitation Learning The imitation learning method leads to an improvement of around 3 BLEU points (28.41 vs. 25.34; see lines 6 and 3).
NAT without IL degenerates into a normal NAT model. As discussed in section 1, current NAT approaches suffer from delayed supervisions (or rewards) and large search space in training. NAT decoder simultaneously generates all words of the translation, the search space of which is very large. Length Parallel Decoding Compared with the greedy beam search, LPD technique improves the performance around 2 BLEU points(30.68 vs. 28.41, from line 7 and 6). The observation is in consist with our intuition that sampling from the length space can improve the performance. Complementary with Knowledge Distillation In consist with previous work, NAT models achieved +4.2 BLEU score from sequence level knowledge distillation technique (see in row 1 and row 2). imitate-NAT without knowledge distillation obtained 23.56 BLEU score which is comparable to non-imitation NAT with knowledge distillation (see in row 3 and row 4). More importantly, we found that the imitation learning framework complemented with knowledge distillation perfectly. As shown in row 3 and 6, imitate-NAT substantially improves the performance of nonimitation NAT knowledge distillation up by +3.3 BLEU score. 1311 Distill UniformCopy SoftCopy LPD Imitation Learning BLEU 1 √ w/o 16.51 2 √ √ w/o 20.72 3 √ √ w/o 25.34 4 √ w/ 23.56 5 √ √ w/ 24.35 6 √ √ w/ 28.41 7 √ √ √ w/ 30.68 Table 2: Ablation study on the dev set of IWSLT16. w/ indicates with and w/o indicates without. LPD indicates length parallel decoding. Figure 4: Action category assignment distribution. Redistribute method leads to a more balanced distribution(blue), otherwise, it will be extremely unbalanced(red). Action Distribution Study One common problem in unsupervised clustering is that the results are unbalanced. In this paper, we call that an action is selected or activated when its probability in π(ot) is maximum. Then the space usage can be calculated by counting the number of times each action is selected. We evaluate the space usage on the development set of IWSLT16, and the results are presented in Figure 4. We greatly alleviate the problem of space usage through the category redistribution technique(Eq.7, Eq.8). When building the model without category redistribution, most of the space is not utilized, and the clustering results are concentrated in a few spatial locations, and the category information cannot be dynamically and flexibly characterized. In contrast, category redistribution makes the category distribution more balanced and more in line with the inherent rules of the language, so the clustering results can effectively guide the learning of the NAT model. 6 Related Work Gu et al. (2017) first developed a nonautoregressive NMT system which produces the outputs in parallel and the inference speed is thus significantly boosted. However, it comes at the cost that the translation quality is largely sacrificed since the intrinsic dependency within the natural language sentence is abandoned. A bulk of work has been proposed to mitigate such performance degradation. Lee et al. (2018) proposed a method of iterative refinement based on latent variable model and denoising autoencoder. Libovick and Helcl (2018) take NAT as a connectionist temporal classification problem, which achieved better latency. Kaiser et al. (2018) use discrete latent variables that makes decoding much more parallelizable. 
They first auto encode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from the shorter latent sequence in parallel. Guo et al. (2018) enhanced decoder input by introducing phrase table in SMT and embedding transformation. Wang et al. (2019) leverage the dual nature of translation tasks (e.g., English to German and German to English) and minimize a backward reconstruction error to ensure that the hidden states of the NAT decoder are able to recover the source side sentence. Unlike the previous work to modify the NAT architecture or decoder inputs, we introduce an imitation learning framework to close the performance gap between NAT and AT. To the best of our knowledge, it is the first time that imitation learning was applied to such problems. 7 Conclusion We propose an imitation learning framework for non-autoregressive neural machine translation to bridge the performance gap between NAT and AT. Specifically, We propose to employ a knowledgeable AT demonstrator to supervise every decoding state of NAT across different time steps and lay1312 ers. As a result, imitate-NAT leads to remarkable improvements and largely closes the performance gap between NAT and AT on several benchmark datasets. As a future work, we can try to improve the performance of the NMT by introducing more powerful demonstrator with different structure (e.g. right to left). Another direction is to apply the proposed imitation learning framework to similar scenarios such as simultaneous interpretation. Acknowledgement We thank the anonymous reviewers for their thoughtful comments. Xu Sun is the corresponding author of this paper. References Yoshua Bengio. 2009. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127. Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP 2014, pages 1724–1734. Jiatao Gu, James Bradbury, Caiming Xiong, Victor O. K. Li, and Richard Socher. 2017. NonAutoregressive Neural Machine Translation. arXiv:1711.02281 [cs]. ArXiv: 1711.02281. Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, and Tie-Yan Liu. 2018. Non-autoregressive neural machine translation with enhanced decoder input. CoRR, abs/1812.09664. Michael Hahn and Frank Keller. 2016. Modeling human reading with neural attention. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 85– 95. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. ukasz Kaiser, Aurko Roy, Ashish Vaswani, Niki Parmar, Samy Bengio, Jakob Uszkoreit, and Noam Shazeer. 2018. Fast Decoding in Sequence Models using Discrete Latent Variables. arXiv:1803.03382 [cs]. ArXiv: 1803.03382. Yoon Kim and Alexander M. Rush. 2016. SequenceLevel Knowledge Distillation. arXiv:1606.07947 [cs]. ArXiv: 1606.07947. Philip Koehn. 2010. Statistical machine translation. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 
2003. Statistical phrase-based translation. In HLTNAACL. Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement. arXiv:1802.06901 [cs, stat]. ArXiv: 1802.06901. Jindich Libovick and Jindich Helcl. 2018. End-toEnd Non-Autoregressive Neural Machine Translation with Connectionist Temporal Classification. arXiv:1811.04719 [cs]. ArXiv: 1811.04719. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin A. Riedmiller. 2013. Playing atari with deep reinforcement learning. CoRR, abs/1312.5602. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin A. Riedmiller, Andreas Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. 2015. Human-level control through deep reinforcement learning. Nature, 518:529–533. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL 2016. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS, 2014, pages 3104–3112. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR, abs/1706.03762. Chunqi Wang, Ji Zhang, and Haiqing Chen. 2018. Semi-Autoregressive Neural Machine Translation. arXiv:1808.08583 [cs]. ArXiv: 1808.08583. Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. 2019. Non-Autoregressive Machine Translation with Auxiliary Regularization. arXiv e-prints, page arXiv:1902.10245. Junyuan Xie, Ross B. Girshick, and Ali Farhadi. 2016. Unsupervised deep embedding for clustering analysis. In ICML.
2019
125
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1313–1323 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1313 Monotonic Infinite Lookback Attention for Simultaneous Machine Translation Naveen Arivazhagan∗ Colin Cherry∗ Wolfgang Macherey Chung-Cheng Chiu Semih Yavuz Google navari,colincherry,wmach,[email protected] syavuz,rpang,mweili,[email protected] Ruoming Pang Wei Li Colin Raffel Abstract Simultaneous machine translation begins to translate each source sentence before the source speaker is finished speaking, with applications to live and streaming scenarios. Simultaneous systems must carefully schedule their reading of the source sentence to balance quality against latency. We present the first simultaneous translation system to learn an adaptive schedule jointly with a neural machine translation (NMT) model that attends over all source tokens read thus far. We do so by introducing Monotonic Infinite Lookback (MILk) attention, which maintains both a hard, monotonic attention head to schedule the reading of the source sentence, and a soft attention head that extends from the monotonic head back to the beginning of the source. We show that MILk’s adaptive schedule allows it to arrive at latency-quality trade-offs that are favorable to those of a recently proposed wait-k strategy for many latency values. 1 Introduction Simultaneous machine translation (MT) addresses the problem of how to begin translating a source sentence before the source speaker has finished speaking. This capability is crucial for live or streaming translation scenarios, such as speech-tospeech translation, where waiting for one speaker to complete their sentence before beginning the translation would introduce an intolerable delay. In these scenarios, the MT engine must balance latency against quality: if it acts before the necessary source content arrives, translation quality degrades; but waiting for too much source content can introduce unnecessary delays. We refer to the strategy an MT engine uses to balance reading source tokens against writing target tokens as its schedule. ∗Equal contributions. Recent work in simultaneous machine translation tends to fall into one of two bins: • The schedule is learned and/or adaptive to the current context, but assumes a fixed MT system trained on complete source sentences, as typified by wait-if-* (Cho and Esipova, 2016) and reinforcement learning approaches (Grissom II et al., 2014; Gu et al., 2017). • The schedule is simple and fixed and can thus be easily integrated into MT training, as typified by wait-k approaches (Dalvi et al., 2018; Ma et al., 2018). Neither scenario is optimal. A fixed schedule may introduce too much delay for some sentences, and not enough for others. Meanwhile, a fixed MT system that was trained to expect complete sentences may impose a low ceiling on any adaptive schedule that uses it. Therefore, we propose to train an adaptive schedule jointly with the underlying neural machine translation (NMT) system. Monotonic attention mechanisms (Raffel et al., 2017; Chiu and Raffel, 2018) are designed for integrated training in streaming scenarios and provide our starting point. They encourage streaming by confining the scope of attention to the most recently read tokens. This restriction, however, may hamper long-distance reorderings that can occur in MT. We develop an approach that removes this limitation while preserving the ability to stream. 
We use their hard, monotonic attention head to determine how much of the source sentence is available. Before writing each target token, our learned model advances this head zero or more times based on the current context, with each advancement revealing an additional token of the source sentence. A secondary, soft attention head can then attend to any source words at or before that point, resulting in Monotonic Infinite 1314 Lookback (MILk) attention. This, however, removes the memory constraint that was encouraging the model to stream. To restore streaming behaviour, we propose to jointly minimize a latency loss. The entire system can efficiently be trained in expectation, as a drop-in replacement for the familiar soft attention. Our contributions are as follows: 1. We present MILk attention, which allows us to build the first simultaneous MT system to learn an adaptive schedule jointly with an NMT model that attends over all source tokens read thus far. 2. We extend the recently-proposed Average Lagging latency metric (Ma et al., 2018), making it differentiable and calculable in expectation, which allows it to be used as a training objective. 3. We demonstrate favorable trade-offs to those of wait-k strategies at many latency values, and provide evidence that MILk’s advantage extends from its ability to adapt based on source content. 2 Background Much of the earlier work on simultaneous MT took the form of strategies to chunk the source sentence into partial segments that can be translated safely. These segments could be triggered by prosody (Fügen et al., 2007; Bangalore et al., 2012) or lexical cues (Rangarajan Sridhar et al., 2013), or optimized directly for translation quality (Oda et al., 2014). Segmentation decisions are surrogates for the core problem, which is deciding whether enough source content has been read to write the next target word correctly (Grissom II et al., 2014). However, since doing so involves discrete decisions, learning via back-propagation is obstructed. Previous work on simultaneous NMT has thus far side-stepped this problem by making restrictive simplifications, either on the underlying NMT model or on the flexibility of the schedule. Cho and Esipova (2016) apply heuristics measures to estimate and then threshold the confidence of an NMT model trained on full sentences to adapt it at inference time to the streaming scenario. Several others use reinforcement learning (RL) to develop an agent to predict read and write decisions (Satija and Pineau, 2016; Gu et al., 2017; Alinejad et al., 2018). However, due to computational challenges, they pre-train an NMT model on full sentences and then train an agent that sees the fixed NMT model as part of its environment. Dalvi et al. (2018) and Ma et al. (2018) use fixed schedules and train their NMT systems accordingly. In particular, Ma et al. (2018) advocate for a wait-k strategy, wherein the system always waits for exactly k tokens before beginning to translate, and then alternates between reading and writing at a constant pre-specified emission rate. Due to the deterministic nature of their schedule, they can easily train the NMT system with the schedule in place. This can allow the NMT system to learn to anticipate missing content using its inherent language modeling capabilities. On the downside, with a fixed schedule the model cannot speed up or slow down appropriately for particular inputs. 
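For reference, the fixed wait-k schedule discussed above can be written down in a few lines; the sketch below assumes an emission rate of one target token per source token (the simplest setting), so it illustrates the policy rather than any particular system evaluated in this paper.

```python
def wait_k_schedule(k, src_len, tgt_len):
    """Enumerate the read ('R') / write ('W') actions of a wait-k policy.

    Reads k source tokens first, then alternates one write per read
    (emission rate 1; an illustrative simplification) until the target
    is fully written, never reading past the end of the source.
    """
    actions, read, written = [], 0, 0
    while written < tgt_len:
        if read < min(k + written, src_len):
            actions.append("R"); read += 1
        else:
            actions.append("W"); written += 1
    return actions

# A wait-3 schedule for a 4-token source and 4-token target:
print(wait_k_schedule(3, 4, 4))  # ['R', 'R', 'R', 'W', 'R', 'W', 'W', 'W']
```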
Press and Smith (2018) recently developed an attention-free model that aims to reduce computational and memory requirements. They achieve this by maintaining a single running context vector, and eagerly emitting target tokens based on it whenever possible. Their method is adaptive and uses integrated training, but the schedule itself is trained with external supervision provided by word alignments, while ours is latent and learned in service to the MT task. 3 Methods In sequence-to-sequence modeling, the goal is to transform an input sequence x = {x1, . . . , x|x|} into an output sequence y = {y1, . . . , y|y|}. A sequence-to-sequence model consists of an encoder which maps the input sequence to a sequence of hidden states and a decoder which conditions on the encoder output and autoregressively produces the output sequence. In this work, we consider sequence-to-sequence models where the encoder and decoder are both recurrent neural networks (RNNs) and are updated as follows: hj = EncoderRNN(xj, hj−1) (1) si = DecoderRNN(yi−1, si−1, ci) (2) yi = Output(si, ci) (3) where hj is the encoder state at input timestep j, si is the decoder state at output timestep i, and ci is a context vector. The context vector is computed based on the encoder hidden states through the use of an attention mechanism (Bahdanau et al., 1315 Output y (a) Soft attention. Encoder states h (b) Monotonic attention. Output y (c) MILk attention. Figure 1: Simplified diagrams of the attention mechanisms discussed in Sections 3.1 and 3.2. The shading of each node indicates the amount of attention weight the model assigns to a given encoder state (horizontal axis) at a given output timestep (vertical axis). 2014). The function Output(·) produces a distribution over output tokens yi given the current state si and context vector ci. In standard soft attention, the context vector is computed as follows: ei,j = Energy(hj, si−1) (4) αi,j = softmax(ei,:)j := exp(ei,j) PT k=1 exp(ei,k) (5) ci = |x| X j=1 αi,jhj (6) where Energy() is a multi-layer perceptron. One issue with standard soft attention is that it computes ci based on the entire input sequence for all output timesteps; this prevents attention from being used in streaming settings since the entire input sequence needs to be ingested before generating any output. To enable streaming, we require a schedule in which the output at timestep i is generated using just the first ti input tokens, where 1 ≤ti ≤|x|. 3.1 Monotonic Attention Raffel et al. (2017) proposed a monotonic attention mechanism that modifies standard soft attention to provide such a schedule of interleaved reads and writes, while also integrating training with the rest of the NMT model. Monotonic attention explicitly processes the input sequence in a left-to-right order and makes a hard assignment of ci to one particular encoder state denoted hti. For output timestep i, the mechanism begins scanning the encoder states starting at j = ti−1. For each encoder state, it produces a Bernoulli selection probability pi,j, which corresponds to the probability of either stopping and setting ti = j, or else moving on to the next input timestep, j +1, which represents reading one more source token. 
This selection probability is computed through the use of an energy function that is passed through a logistic sigmoid to parameterize the Bernoulli random variable: ei,j = MonotonicEnergy(si−1, hj) (7) pi,j = σ(ei,j) (8) zi,j ∼Bernoulli(pi,j) (9) If zi,j = 0, j is incremented and these steps are repeated; if zi,j = 1, ti is set to j and ci is set to hti. This approach involves sampling a discrete random variable and a hard assignment of ci = hti, which precludes backpropagation. Raffel et al. (2017) instead compute the probability that ci = hj and use this to compute the expected value of ci, which can be used as a drop-in replacement for standard soft attention, and which allows for training with backpropagation. The probability that the attention mechanism attends to state hj at output timestep i is computed as αi,j = pi,j  (1 −pi,j−1)αi,j−1 pi,j−1 + αi−1,j  (10) There is a solution to this recurrence relation which allows αi,j to be computed for all j in parallel using cumulative sum and cumulative product operations; see Raffel et al. (2017) for details. Note that when pi,j is either 0 or 1, the soft and hard approaches are the same. To encourage this, Raffel et al. (2017) use the common approach of adding zero-mean Gaussian noise to the logistic sigmoid function’s activations. Equation 8 becomes: pi,j = σ (ei,j + N(0, n)) (11) 1316 One can control the extent to which pi,j is drawn toward discrete values by adjusting the noise variance n. At run time, we forgo sampling in favor of simply setting zi,j = 1(ei,j > 0). While the monotonic attention mechanism allows for streaming attention, it requires that the decoder attend only to a single encoder state, hti. To address this issue, Chiu and Raffel (2018) proposed monotonic chunkwise attention (MoChA), which allows the model to perform soft attention over a small fixed-length chunk preceding ti, i.e. over all available encoder states, hti−cs+1, hti−cs+2, . . . , hti for some fixed chunk size cs. 3.2 Monotonic Infinite Lookback Attention In this work, we take MoChA one step further, by allowing the model to perform soft attention over the encoder states h1, h2, . . . , hti. This gives the model “infinite lookback” over the past seen thus far, so we dub this technique Monotonic Infinite Lookback (MILk) attention. The infinite lookback provides more flexibility and should improve the modeling of long-distance reorderings and dependencies. The increased computational cost, from linear to quadratic computation, is of little concern as our focus on the simultaneous scenario means that out largest source of latency will be waiting for source context. Concretely, we maintain a full monotonic attention mechanism and also a soft attention mechanism. Assuming that the monotonic attention component chooses to stop at ti, MILk first computes soft attention energies ui,k = SoftmaxEnergy(hk, si−1) (12) for k ∈1, 2, . . . , ti where SoftmaxEnergy(·) is an energy function similar to Equation (4). Then, MILk computes a context ci by ci = ti X j=1 exp(ui,j) Pti l=1 exp(ui,l) hj (13) Note that a potential issue with this approach is that the model can set the monotonic attention head ti = |x| for all i, in which case the approach is equivalent to standard soft attention. We address this issue in the following subsection. To train models using MILk, we compute the expected value of ci given the monotonic attention probabilities and soft attention energies. 
To do so, we must consider every possible path through which the model could assign attention to a given encoder state. Specifically, we can compute the attention distribution induced by MILk by βi,j = |x| X k=j αi,k exp(ui,j) Pk l=1 exp(ui,l) ! (14) The first summation reflects the fact that hj can influence ci as long as k ≥j, and the term inside the summation reflects the attention probability associated with some monotonic probability αi,k and the soft attention distribution. This calculation can be computed efficiently using cumulative sum operations by replacing the outer summation with a cumulative sum and the inner operation with a cumulative sum after reversing u. Once we have the βi,j distribution, calculating the expected context ci follows a familiar formula: ci = P|x| j=1 βi,jhj. 3.3 Latency-augmented Training By moving to an infinite lookback, we have gained the full power of a soft attention mechanism over any source tokens that have been revealed up to time ti. However, while the original monotonic attention encouraged streaming behaviour implicitly due to the restriction on the system’s memory, MILk no longer has any incentive to do this. It can simply wait for all source tokens before writing the first target token. We address this problem by training with an objective that interpolates log likelihood with a latency metric. Sequence-to-sequence models are typically trained to minimize the negative log likelihood, which we can easily augment with a latency cost: L(θ) = − X (x,y) log p(y|x; θ) + λC(g) (15) where λ is a user-defined latency weight, g = {g1, . . . , g|y|} is a vector that describes the delay incurred immediately before each target time step (see Section 4.1), and C is a latency metric that transforms these delays into a cost. In the case of MILk, gi is equal to ti, the position of the monotonic attention head.1 Recall that during training, we never actually make a hard decision about ti’s location. Instead, we can use αi,j, 1We introduce gi to generalize beyond methods with hard attention heads and to unify notation with Ma et al. (2018). 1317 the probability that ti = j, to get expected delay: gi = |x| X j=1 jαi,j (16) So long as our metric is differentiable and welldefined over fractional delays, Equation (15) can be used to guide MILk to low latencies. 3.4 Preserving Monotonic Probability Mass In the original formulations of monotonic attention (see Section 3.1), it is possible to choose not to stop the monotonic attention head, even at the end of the source sentence. In such cases, the attention returns an all-zero context vector. In early experiments, we found that this creates an implicit incentive for low latencies: the MILk attention head would stop early to avoid running off the end of the sentence. This implicit incentive grows stronger as our selection probabilities pi,j come closer to being binary decisions. Meanwhile, we found it beneficial to have very-near-tobinary decisions in order to get accurate latency estimates for latency-augmented training. Taken all together, we found that MILk either destabilized, or settled into unhealthily-low-latency regions. We resolve this problem by forcing MILk’s monotonic attention head to once stop when it reaches the EOS token, by setting pi,|x| = 1.2 4 Measuring Latency Our plan hinges on having a latency cost that is worth optimizing. To that end, we describe two candidates, and then modify the most promising one to accommodate our training scenario. 
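To illustrate how these expectations can be computed in practice, the following numpy sketch implements the induced MILk attention of Equation (14) with cumulative-sum operations, together with the expected delay of Equation (16); the array layout, the numerical-stability shift, and the clamping constant are implementation assumptions rather than details prescribed by the paper.

```python
import numpy as np

def milk_expected_attention(alpha, u):
    """Induced MILk attention beta (Eq. 14) from monotonic probabilities and soft energies.

    alpha: (I, J) -- alpha[i, j] = P(monotonic head stops at source j for target i).
    u:     (I, J) -- soft attention energies.
    Returns beta of shape (I, J); each row sums to 1 when alpha does.
    """
    exp_u = np.exp(u - u.max(axis=-1, keepdims=True))      # shift cancels in the ratio
    denom = np.cumsum(exp_u, axis=-1)                      # sum_{l <= k} exp(u_il)
    inner = alpha / np.maximum(denom, 1e-9)                # alpha_ik / denom_ik
    # reversed cumulative sum gives sum over k >= j
    tail = np.flip(np.cumsum(np.flip(inner, axis=-1), axis=-1), axis=-1)
    return exp_u * tail                                    # beta_ij

def expected_delay(alpha):
    """Expected position of the monotonic head (Eq. 16): g_i = sum_j j * alpha_ij."""
    positions = np.arange(1, alpha.shape[-1] + 1)
    return alpha @ positions

# The latency-augmented objective (Eq. 15) then only needs a differentiable metric C:
#   loss = negative_log_likelihood + lam * C(expected_delay(alpha))
```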
4.1 Previous Latency Metrics Cho and Esipova (2016) introduced Average Proportion (AP), which averages the absolute delay incurred by each target token: AP = 1 |x| |y| |y| X i=1 gi (17) 2While training, we perform the equivalent operation of shifting the any residual probability mass from overshooting the source sentence, 1 −P|x| j=1 αi,j, to the final source token at position |x|. This bypasses floating point errors introduced by the parallelized cumulative sum and cumulative product operations (Raffel et al., 2017). This same numerical instability helps explain why the parameterized stopping probability pi,j does not learn to detect the end of the sentence without intervention. where gi is delay at time i: the number of source tokens read by the agent before writing the ith target token. This metric has some nice properties, such as being bound between 0 and 1, but it also has some issues. Ma et al. (2018) observe that their wait-k system with a fixed k = 1 incurs different AP values as sequence length |x| = |y| ranges from 2 (AP = 0.75) to ∞(AP = 0.5). Knowing that a very-low-latency wait-1 system incurs at best an AP of 0.5 also implies that much of the metric’s dynamic range is wasted; in fact, Alinejad et al. (2018) report that AP is not sufficiently sensitive to detect their improvements to simultaneous MT. Recently, Ma et al. (2018) introduced Average Lagging (AL), which measures the average rate by which the MT system lags behind an ideal, completely simultaneous translator: AL = 1 τ τ X i=1 gi −i −1 γ (18) where τ is the earliest timestep where the MT system has consumed the entire source sequence: τ = argminigi = |x| (19) and γ = |y|/|x| accounts for the source and target having different sequence lengths. This metric has the nice property that when |x| = |y|, a wait-k system will achieve an AL of k, which makes the metric very interpretable. It also has no issues with sentence length or sensitivity. 4.2 Differentiable Average Lagging Average Proportion already works as a C function, but we prefer Average Lagging for the reasons outlined above. Unfortunately, it is not differentiable, nor is it calculable in expectation, due to the argmin in Equation (19). We present Differentiable Average Lagging (DAL), which eliminates the argmin by making AL’s treatment of delay internally consistent. AL’s argmin is used to calculate τ, which is used in turn to truncate AL’s average at the point where all source tokens have been read. Why is this necessary? We can quickly see τ’s purpose by reasoning about a simpler version of AL where τ = |y|. Table 1 shows the time-indexed lags that are averaged to calculate AL for a wait-3 system. The lags make the problem clear: each position beyond the point where all source tokens have been read (gi = |x|) has its lag reduced by 1318 Statistics Scores i 1 2 3 4 τ = 2 τ = |y| gi 3 4 4 4 ALi 3 3 2 1 AL = 3 AL = 2.25 Table 1: Comparing AL with and without its truncated average, tracking time-indexed lag ALi = gi −i−1 γ when |x| = |y| = 4 for a wait-3 system. 1, pulling the average lag below k. By stopping its average at τ = 2, AL maintains the property that a wait-k system receives an AL of k. τ is necessary because the only way to incur delay is to read a source token. Once all source tokens have been read, all target tokens appear instantaneously, artificially dragging down the average lag. This is unsatisfying: the system lagged behind the source speaker while they were speaking. It should continue to do so after they finished. 
AL solves this issue by truncating its average, enforcing an implicit and poorly defined delay for the excluded, problematic tokens. We propose instead to enforce a minimum delay for writing any target token. Specifically, we model each target token as taking at least 1 γ units of time to write, mirroring the speed of the ideal simultaneous translator in AL’s Equation (18). We wrap g in a g′ that enforces our minimum delay: g′ i =  gi i = 1 max gi, g′ i−1 + 1 γ  i > 1 (20) Like gi, g′ i represents the amount of delay incurred just before writing the ith target token. Intuitively, the max enforces our minimum delay: g′ i is either equal to gi, the number of source tokens read, or to g′ i−1 + 1 γ , the delay incurred just before the previous token, plus the time spent writing that token. The recurrence ensures that we never lose track of earlier delays. With g′ in place, we can define our Differentiable Average Lagging: DAL = 1 |y| |y| X i=1 g′ i −i −1 γ (21) DAL is equal to AL in many cases, in particular, when measuring wait-k systems for sentences of equal length, both always return a lag of k. See Table 2 for its treatment of our wait-3 example. Having eliminated τ, DAL is both differentiable and calcuable in expectation. Cherry and Foster (2019) provide further motivation and analysis for Statistics Scores i 1 2 3 4 g′ i 3 4 5 6 DALi 3 3 3 3 DAL = 3 Table 2: DAL’s time-indexed lag DALi = g′ i −i−1 γ when |x| = |y| = 4 for a wait-3 system. DAL, alongside several examples of cases where DAL yields more intuitive results than AL. 5 Experiments We run our experiments on the standard WMT14 English-to-French (EnFr; 36.3M sentences) and WMT15 German-to-English (DeEn; 4.5M sentences) tasks. For EnFr we use a combination of newstest 2012 and newstest 2013 for development and report results on newstest 2014. For DeEn we validate on newstest 2013 and then report results on newstest 2015. Translation quality is measured using detokenized, cased BLEU (Papineni et al., 2002). For each data set, we use BPE (Sennrich et al., 2016) on the training data to construct a 32,000-type vocabulary that is shared between the source and target languages. 5.1 Model Our model closely follows the RNMT+ architecture described by Chen et al. (2018) with modifications to support streaming translation. It consists of a 6 layer LSTM encoder and an 8 layer LSTM decoder with additive attention (Bahdanau et al., 2014). All streaming models including waitk, MoChA and MILk use unidirectional encoders, while offline translation models use a bidirectional encoder. Both encoder and decoder LSTMs have 512 hidden units, per gate layer normalization (Ba et al., 2016), and residual skip connections after the second layer. The models are regularized using dropout with probability 0.2 and label smoothing with an uncertainty of 0.1 (Szegedy et al., 2016). Models are optimized until convergence using data parallelism over 32 P100s, using Adam (Kingma and Ba, 2015) with the learning rate schedule described in Chen et al. (2018) and a batch size of 4,096 sentence-pairs per GPU. Checkpoints are selected based on development loss. All streaming models use greedy decoding, while offline models use beam search with a beam size of 20. We implement soft attention, monotonic attention, MoChA, MILk and wait-k as instantiations 1319 unpreserved preserved λ BLEU DAL BLEU DAL 0.0 27.7 21.0 27.7 27.9 0.1 27.0 13.6 27.6 10.5 0.2 25.7 11.6 27.5 8.7 Table 3: Varying MILk’s λ with and without mass preservation on the DeEn development set. 
n BLEU DAL 0 3.4 24.2 1 10.8 12.9 2 24.6 12.3 3 27.5 10.4 4 27.5 8.7 6 26.3 7.2 Table 4: Varying MILk’s discreteness parameter n with λ fixed at 0.2 on the DeEn development set. of an attention interface in a common code base, allowing us to isolate their contributions. By analyzing development sentence lengths, we determined that wait-k should employ a emission rate of 1 for DeEn, and 1.1 for EnFr. 5.2 Development We tuned MILk on our DeEn development set. Two factors were crucial for good performance: the preservation of monotonic mass (Section 3.4), and the proper tuning of the noise parameter n in Equation 11, which controls the discreteness of monotonic attention probabilities during training. Table 3 contrasts MILk’s best configuration before mass preservation against our final system. Before preservation, MILk with a latency weight λ = 0 still showed a substantial reduction in latency from the maximum value of 27.9, indicating an intrinsic latency incentive. Furthermore, training quickly destabilized, resulting in very poor trade-offs for λs as low as 0.2. After modifying MILk to preserve mass, we then optimized noise with λ fixed at a low but relevant value of 0.2, as shown in Table 4. We then proceeded the deploy the selected value of n = 4 for testing both DeEn and EnFr. 5.3 Comparison with the state-of-the-art We compare MILk to wait-k, the current stateof-the-art in simultaneous NMT. We also include MILk’s predecessors, Monotonic Attention and MoChA, which have not previously been evaluFigure 2: Quality-latency comparison for Germanto-English WMT15 (DeEn) with DAL (upper), AL (lower-left), AP (lower-right). ated with latency metrics. We plot latency-quality curves for each system, reporting quality using BLEU, and latency using Differentiable Average Lagging (DAL), Average Lagging (AL) or Average Proportion (AP) (see Section 4). We focus our analysis on DAL unless stated otherwise. MILk curves are produced by varying the latency loss weight λ,3 wait-k curves by varying k,4 and MoChA curves by varying chunk size.5 Both MILk and wait-k have settings (λ = 0 and k = 300) corresponding to full attention. Results are shown in Figures 2 and 3.6 For DeEn, we begin by noting that MILk has a clear separation above its predecessors MoChA and Monotonic Attention, indicating that the infinite lookback is indeed a better fit for translation. Furthermore, MILk is consistently above wait-k for lags between 4 and 14 tokens. MILk is able to retain the quality of full attention (28.4 BLEU) up to a lag of 8.5 tokens, while wait-k begins to fall off for lags below 13.3 tokens. At the lowest comparable latency (4 tokens), MILk is 1.5 BLEU points 3λ = 0.75, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05, 0.01, 0.0 4k = 2, 4, 6, 8, 10, 12, 14, 16, 20, 24, 300 5cs = 1 (Monotonic Attention), 2, 4, 8, and 16 6Full sized graphs for all latency metrics, along with the corresponding numeric scores are available in Appendix A, included as supplementary material. 1320 Figure 3: Quality-latency comparison for English-toFrench WMT14 (EnFr) with DAL (upper), AL (lowerleft), AP (lower-right). ahead of wait-k. EnFr is a much easier language pair: both MILk and wait-k maintain the BLEU of full attention at lags of 10 tokens. However, we were surprised to see that this does not mean we can safely deploy very low ks for wait-k; its quality drops off surprisingly quickly at k = 8 (DAL=8.4, BLEU=39.8). MILk extends the flat “safe” region of the curve out to a lag of 7.2 (BLEU=40.5). 
At the lowest comparable lag (4.5 tokens), MILk once again surpasses wait-k, this time by 2.3 BLEU points. The k = 2 point for wait-k has been omitted from all graphs to improve clarity. The omitted BLEU/DAL pairs are 19.5/2.5 for DeEn and 28.9/2.9 for EnFr, both of which trade very large losses in BLEU for small gains in lag. However, wait-k’s ability to function at all at such low latencies is notable. The configuration of MILk tested here was unable to drop below lags of 4. Despite MILk having been optimized for DAL, MILk’s separation above wait-k only grows as we move to the more established metrics AL and AP. DAL’s minimum delay for each target token makes it far more conservative than AL or AP. Unlike DAL, these metrics reward MILk and its predecessors for their tendency to make many consecutive writes in the middle of a sentence. Figure 4: Two EnFr sentences constructed to contrast MILk’s handling of a short noun phrase John Smith against the longer John Smith’s lawyer. Translated by MILk with λ = 0.2. 5.4 Characterizing MILK’s schedule We begin with a qualitative characterization of MILk’s behavior by providing diagrams of MILk’s attention distributions. The shade of each circle indicates the strength of the soft alignment, while bold outlines indicate the location of the hard attention head, whose movement is tracked by connecting lines. In general, the attention head seems to loosely follow noun- and verb-phrase boundaries, reading one or two tokens past the end of the phrase to ensure it is complete. This behavior and its benefits are shown in Figure 4, which contrast the simple noun phrase John Smith against the more complex John Smith’s laywer. By waiting until the end of both phrases, MILk is able to correctly re-order avocat (lawyer). Figure 5 shows a more complex sentence drawn 1321 Figure 5: An example EnFr sentence drawn from our development set, as translated by MILk with λ = 0.2. Figure 6: An example EnFr sentence drawn from our development set, as translated by wait-6. from our development set. MILk gets going after reading just 4 tokens, writing the relatively safe, En 2008. It does wait, but it saves its pauses for tokens with likely future dependencies. A particularly interesting pause occurs before the de in de la loi. This preposition could be either de la or du, depending on the phrase it modifies. We can see MILk pause long enough to read one token after law, allowing it to correctly choose de la to match the feminine loi (law). Looking at the corresponding wait-6 run in Figure 6, we can see that wait-6’s fixed schedule does not read law before writing the same de. To its credit, wait-6 anticipates correctly, also choosing de la, likely due to the legal context provided by the nearby phrase, the constitutionality. We can also perform a quantitative analysis of Figure 7: Histogram of initial delays for MILk (λ = 0.2) and wait-6 on the EnFr development set. MILk’s adaptivity by monitoring its initial delays; that is, how many source tokens does it read before writing its first target token? We decode our EnFr development set with MILk λ = 0.2 as well as wait-6 and count the initial delays for each.7 The resulting histogram is shown in Figure 7. We can see that MILk has a lot of variance in its initial delays, especially when compared to the near-static wait-6. This is despite them having very similar DALs: 5.8 for MILk and 6.5 for wait-6. 
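For reference, the DAL values quoted in this analysis can be recomputed from a system's delay vector; the sketch below is a direct transcription of Equations (20)-(21), assuming delays are counted in source tokens and γ = |y|/|x|.

```python
def differentiable_average_lagging(g, src_len, tgt_len):
    """DAL (Eq. 20-21) for a delay vector g, where g[i] is the number of
    source tokens read before writing target token i+1."""
    gamma = tgt_len / src_len
    g_prime, dal = [], 0.0
    for i, gi in enumerate(g):
        prev = g_prime[-1] + 1.0 / gamma if g_prime else float("-inf")
        g_prime.append(max(gi, prev))      # enforce the minimum per-token writing delay
        dal += g_prime[-1] - i / gamma     # lag relative to the ideal simultaneous translator
    return dal / tgt_len

# The wait-3 example from Table 2: delays (3, 4, 4, 4) with |x| = |y| = 4 give DAL = 3.
print(differentiable_average_lagging([3, 4, 4, 4], 4, 4))  # 3.0
```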
6 Conclusion We have presented Monotonic Infinite Lookback (MILk) attention, an attention mechanism that uses a hard, monotonic head to manage the reading of the source, and a soft traditional head to attend over whatever has been read. This allowed us to build a simultaneous NMT system that is trained jointly with its adaptive schedule. Along the way, we contributed latency-augmented training and a differentiable latency metric. We have shown MILk to have favorable quality-latency trade-offs compared to both wait-k and to earlier monotonic attention mechanisms. It is particularly useful for extending the length of the region on the latency curve where we do not yet incur a major reduction in BLEU. References Ashkan Alinejad, Maryam Siahbani, and Anoop Sarkar. 2018. Prediction improves simultaneous neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3022–3027. Association for Computational Linguistics. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer Normalization. arXiv e-prints, page arXiv:1607.06450. 7Wait-6 will have delays different from 6 only for source sentences with fewer than 6 tokens. 1322 Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Srinivas Bangalore, Vivek Kumar Rangarajan Sridhar, Prakash Kolan, Ladan Golipour, and Aura Jimenez. 2012. Real-time incremental speech-tospeech translation of dialogs. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 437–445. Association for Computational Linguistics. Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The best of both worlds: Combining recent advances in neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 76–86. Association for Computational Linguistics. Colin Cherry and George Foster. 2019. Thinking Slow about Latency Evaluation for Simultaneous Machine Translation. arXiv e-prints, page arXiv:1906.00048. Chung-Cheng Chiu and Colin Raffel. 2018. Monotonic chunkwise attention. In International Conference on Learning Representations. Kyunghyun Cho and Masha Esipova. 2016. Can neural machine translation do simultaneous translation? CoRR, abs/1606.02012. Fahim Dalvi, Nadir Durrani, Hassan Sajjad, and Stephan Vogel. 2018. Incremental decoding and training methods for simultaneous translation in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 493–499. Association for Computational Linguistics. Christian Fügen, Alex Waibel, and Muntsin Kolss. 2007. Simultaneous translation of lectures and speeches. Machine Translation, 21(4):209–252. Alvin Grissom II, He He, Jordan Boyd-Graber, John Morgan, and Hal Daumé III. 2014. Don’t until the final verb wait: Reinforcement learning for simultaneous machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1342–1352. Association for Computational Linguistics. 
Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor O.K. Li. 2017. Learning to translate in real-time with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1053–1062. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations. Mingbo Ma, Liang Huang, Hao Xiong, Kaibo Liu, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, and Haifeng Wang. 2018. STACL: Simultaneous Translation with Integrated Anticipation and Controllable Latency. arXiv e-prints, page arXiv:1810.08398. Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2014. Optimizing segmentation strategies for simultaneous speech translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 551–556. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Ofir Press and Noah A. Smith. 2018. You May Not Need Attention. arXiv e-prints, page arXiv:1810.13409. Colin Raffel, Minh-Thang Luong, Peter J. Liu, Ron J. Weiss, and Douglas Eck. 2017. Online and lineartime attention by enforcing monotonic alignments. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2837–2846, International Convention Centre, Sydney, Australia. PMLR. Vivek Kumar Rangarajan Sridhar, John Chen, Srinivas Bangalore, Andrej Ljolje, and Rathinavelu Chengalvarayan. 2013. Segmentation strategies for streaming speech translation. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 230–238. Association for Computational Linguistics. Harsh Satija and Joelle Pineau. 2016. Simultaneous machine translation using deep reinforcement learning. In Proceedings of the Abstraction in Reinforcement Learning Workshop. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725. Association for Computational Linguistics. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2818–2826. 1323 Supplementary Material We have provided a separate file containing supplementary material. Its Appendix A contains fullsized graphs and numeric scores to support our primary experimental comparison in Section 5.3.
2019
126
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1324–1330 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1324 Global Textual Relation Embedding for Relational Understanding Zhiyu Chen1, Hanwen Zha1, Honglei Liu1, Wenhu Chen1, Xifeng Yan1, and Yu Su2 1University of California, Santa Barbara, CA, USA 2The Ohio State University, OH, USA {zhiyuchen, hwzha, honglei, wenhuchen, xyan}@cs.ucsb.edu, [email protected] Abstract Pre-trained embeddings such as word embeddings and sentence embeddings are fundamental tools facilitating a wide range of downstream NLP tasks. In this work, we investigate how to learn a general-purpose embedding of textual relations, defined as the shortest dependency path between entities. Textual relation embedding provides a level of knowledge between word/phrase level and sentence level, and we show that it can facilitate downstream tasks requiring relational understanding of the text. To learn such an embedding, we create the largest distant supervision dataset by linking the entire English ClueWeb09 corpus to Freebase. We use global co-occurrence statistics between textual and knowledge base relations as the supervision signal to train the embedding. Evaluation on two relational understanding tasks demonstrates the usefulness of the learned textual relation embedding. The data and code can be found at https://github.com/czyssrs/GloREPlus 1 Introduction Pre-trained embeddings such as word embeddings (Mikolov et al., 2013; Pennington et al., 2014; Peters et al., 2018; Devlin et al., 2018) and sentence embeddings (Le and Mikolov, 2014; Kiros et al., 2015) have become fundamental NLP tools. Learned with large-scale (e.g., up to 800 billion tokens (Pennington et al., 2014)) open-domain corpora, such embeddings serve as a good prior for a wide range of downstream tasks by endowing task-specific models with general lexical, syntactic, and semantic knowledge. Inspecting the spectrum of granularity, a representation between lexical (and phrasal) level and sentence level is missing. Many tasks require relational understanding of the entities mentioned in the text, e.g., relation extraction and knowledge base completion. Textual relation (Bunescu and Mooney, 2005), defined as the shortest path between two entities in the dependency parse tree of a sentence, has been widely shown to be the main bearer of relational information in text and proved effective in relation extraction tasks (Xu et al., 2015; Su et al., 2018). If we can learn a general-purpose embedding for textual relations, it may facilitate many downstream relational understanding tasks by providing general relational knowledge. Similar to language modeling for learning general-purpose word embeddings, distant supervision (Mintz et al., 2009) is a promising way to acquire supervision, at no cost, for training general-purpose embedding of textual relations. Recently Su et al. (2018) propose to leverage global co-occurrence statistics of textual and KB relations to learn embeddings of textual relations, and show that it can effectively combat the wrong labeling problem of distant supervision (see Figure 1 for example). While their method, named GloRE, achieves the state-of-the-art performance on the popular New York Times (NYT) dataset (Riedel et al., 2010), the scope of their study is limited to relation extraction with smallscale in-domain training data. 
In this work, we take the GloRE approach further and apply it to large-scale, domainindependent data labeled with distant supervision, with the goal of learning general-purpose textual relation embeddings. Specifically, we create the largest ever distant supervision dataset by linking the entire English ClueWeb09 corpus (half a billion of web documents) to the latest version of Freebase (Bollacker et al., 2008), which contains 45 million entities and 3 billion relational facts. After filtering, we get a dataset with over 5 million unique textual relations and around 9 million cooccurring textual and KB relation pairs. We then train textual relation embedding on the collected 1325 Henry_Ford founded Ford_Motor_Company Ford_Motor_Company, named after Henry_Ford nsubj dobj acl nmod:after Textual Relations Knowledge Base Relations Ford_Motor_Company Henry_Ford founder Ford_Motor_Company Henry_Ford named after dobj ←−−founded nsubj −−−→ acl −−→named nmod:after −−−−−−−→ founder 2468.0 24.0 named after 305.0 347.0 ... ... ... Figure 1: Left: The wrong labeling problem of distant supervision. The Ford Motor Company is both founded by and named after Henry Ford. The KB relation founder and named after are thus both mapped to all of the sentences containing the entity pair, resulting in many wrong labels (red dashed arrows). Right: Global co-occurrence statistics from our distant supervision dataset, which clearly distinguishes the two textual relations. dataset in a way similar to (Su et al., 2018), but using Transformer (Vaswani et al., 2017) instead of vanilla RNN as the encoder for better training efficiency. To demonstrate the usefulness of the learned textual relation embedding, we experiment on two relational understanding tasks, relation extraction and knowledge base completion. For relation extraction, we use the embedding to augment PCNN+ATT (Lin et al., 2016) and improve the precision for top 1000 predictions from 83.9% to 89.8%. For knowledge base completion, we replace the neural network in (Toutanova et al., 2015) with our pre-trained embedding followed by a simple projection layer, and gain improvements on both MRR and HITS@10 measures. Our major contributions are summarized as following: • We propose the novel task of learning general-purpose embedding of textual relations, which has the potential to facilitate a wide range of relational understanding tasks. • To learn such an embedding, we create the largest distant supervision dataset by linking the entire English ClueWeb09 corpus to Freebase. The dataset is publicly available1. • Based on the global co-occurrence statistics of textual and KB relations, we learn a textual relation embedding on the collected dataset and demonstrate its usefulness on relational understanding tasks. 2 Related Work Distant supervision methods (Mintz et al., 2009) for relation extraction have been studied by a number of works (Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012; Zeng et al., 2015; Lin et al., 2016; Ji et al., 2017; Wu et al., 2017). (Su et al., 2018) use global co-occurrence statistics of 1https://github.com/czyssrs/GloREPlus textual and KB relations to effectively combat the wrong labeling problem. But the global statistics in their work is limited to NYT dataset, capturing domain-specific distributions. Another line of research that relates to ours is the universal schema (Riedel et al., 2013) for relation extraction, KB completion, as well as its extensions (Toutanova et al., 2015; Verga et al., 2016). 
Wrong labeling problem still exists since their embedding is learned based on individual relation facts. In contrast, we use the global cooccurrence statistics as explicit supervision signal. 3 Textual Relation Embedding In this section, we describe how to collect largescale data via distant supervision (§3.1) and train the textual relation embedding (§3.2). 3.1 Global Co-Occurrence Statistics from Distant Supervision To construct a large-scale distant supervision dataset, we first get the English ClueWeb09 corpus (Callan et al., 2009), which contains 500 million web documents. We employ the FACC1 dataset (Gabrilovich et al., 2013) to map ClueWeb09 to Freebase. We identify over 5 billion entity mentions in ClueWeb09 and link them to Freebase entities. From the linked documents, we extract 155 million sentences containing at least two entity mentions. We then use the Stanford Parser (Chen and Manning, 2014) with universal dependencies to extract textual relations (shortest dependency paths) between each pair of entity mentions2, leading to 788 million relational triples (subject, textual relation, object), of which 451 million are unique. Following (Su et al., 2018), we then collect the global co-occurrence statistics of textual and KB relations. More specifically, for a relational triple (e1, t, e2) with textual relation t, if (e1, r, e2) with 2To be more precise, only shortest dependency paths without any other entity on the path are extracted. 1326 KB relation r exists in the KB, then we count it as a co-occurrence of t and r. We count the total number of co-occurrences of each pair of textual and KB relation across the entire corpus. We then normalize the global co-occurrence statistics such that each textual relation has a valid probability distribution over all the KB relations, which presumably captures the semantics of the textual relation. In the end, a bipartite relation graph is constructed, with one node set being the textual relations, the other node set being the KB relations, and the weighted edges representing the normalized global co-occurrence statistics. Filtering. When aligning the text corpus with the KB, we apply a number of filters to ensure data quality and training efficiency: (1) We only use the KB relations in Freebase Commons, 70 domains that are manually verified to be of release quality. (2) Only textual relations with the number of tokens (including both lexical tokens and dependency relations) less than or equal to 10 are kept. (3) Only non-symmetric textual relations are kept, because symmetric ones are typically from conjunctions like ”and” or ”or”, which are less of interest. (4) Only textual relations with at least two occurrences are kept. After filtering, we end up with a relation graph with 5,559,176 unique textual relations, 1,925 knowledge base (KB) relations, and 8,825,731 edges with non-zero weight. It is worth noting that these filters are very conservative, and we can easily increase the scale of data by relaxing some of the filters. 3.2 Embedding Training Considering both effectiveness and efficiency, we employ the Transformer encoder (Vaswani et al., 2017) to learn the textual relation embedding. It has been shown to excel at learning generalpurpose representations (Devlin et al., 2018). The embedded textual relation token sequence is fed as input. For example, for the textual relation dobj ←−−founded nsubj −−−→, the input is the embedded sequence of {< −dobj >, founded, < nsubj >}. 
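Before continuing with the encoder, the snippet below gives a minimal sketch of how the normalized co-occurrence graph of §3.1 — the supervision target for this encoder — could be assembled. It is an illustration rather than the released pipeline: the input formats and function name are our own assumptions, and the Freebase-Commons and symmetry filters described above are omitted for brevity.

```python
from collections import Counter, defaultdict

def build_cooccurrence_graph(corpus_triples, kb_facts, min_count=2, max_tokens=10):
    """Count co-occurrences of textual and KB relations and normalize them into
    per-textual-relation distributions over KB relations (Sec. 3.1).

    corpus_triples: iterable of (subject, textual_relation, object), where a
        textual relation is a tuple of path tokens, e.g.
        ('<-dobj>', 'founded', '<nsubj>').
    kb_facts: dict mapping an entity pair (subject, object) to the set of KB
        relations holding between them.
    """
    cooc = defaultdict(Counter)   # textual relation -> Counter over KB relations
    freq = Counter()              # corpus frequency of each textual relation

    for subj, t, obj in corpus_triples:
        if len(t) > max_tokens:                   # drop overly long dependency paths
            continue
        freq[t] += 1
        for r in kb_facts.get((subj, obj), ()):   # one co-occurrence of t and r
            cooc[t][r] += 1

    graph = {}
    for t, counts in cooc.items():
        if freq[t] < min_count or not counts:     # keep textual relations seen >= twice
            continue
        total = sum(counts.values())
        graph[t] = {r: c / total for r, c in counts.items()}  # valid distribution over KB relations
    return graph
```

Each resulting distribution over KB relations serves as the supervision signal for the embedding training described next.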
We project the output of the encoder to a vector z as the result embedding. Given a textual relation ti and its embedding zi, denote {r1, r2, ..., rn} as all KB relations, and ˜p(rj|ti) as the global co-occurrence distribution, the weight of the edge between textual relation ti and KB relation rj in the relation graph. The training objective is to minimize the cross-entropy loss: L = − X i,j ˜p(rj|ti)log(p(rj|ti)), (1) Where p(rj|ti) = (softmax(Wzi + b))j. (2) W and b are trainable parameters. We use the filtered relation graph in §3.1 as our training data. To guarantee that the model generalizes to unseen textual relations, we take 5% of the training data as validation set. Word embeddings are initialized with the GloVe (Pennington et al., 2014) vectors3. Dependency relation embeddings are initialized randomly. For the Transformer model, we use 6 layers and 6 attention heads for each layer. We use the Adam optimizer (Kingma and Ba, 2015) with parameter settings suggested by the original Transformer paper (Vaswani et al., 2017). We train a maximum number of 200 epochs and take the checkpoint with minimum validation loss for the result. We also compare with using vanilla RNN in GloRE (Su et al., 2018). Denote the embedding trained with Tranformer as GloRE++, standing for both new data and different model, and with RNN as GloRE+, standing for new data. We observe that, in the early stage of training, the validation loss of RNN decreases faster than Transformer. However, it starts to overfit soon. 4 Experiments In this section, we evaluate the usefulness of the learned textual relation embedding on two popular relational understanding tasks, relation extraction and knowledge base completion. We do not fine-tune the embedding, and only use in-domain data to train a single feedforward layer to project the embedding to the target relations of the domain. We compare this with models that are specifically designed for those tasks and trained using in-domain data. If we can achieve comparable or better results, it demonstrates that the general-purpose embedding captures useful information for downstream tasks. 4.1 Relation Extraction We experiment on the popular New York Times (NYT) relation extraction dataset (Riedel et al., 2010). Following GloRE (Su et al., 2018), we aim at augmenting existing relation extractors with the textual relation embeddings. We first average the 3https://nlp.stanford.edu/projects/glove/ 1327 Precision@N 100 300 500 700 900 1000 PCNN+ATT 97.0 93.7 92.8 89.1 85.2 83.9 PCNN+ATT+GloRE 97.0 97.3 94.6 93.3 90.1 89.3 PCNN+ATT+GloRE+ 98.0 98.7 96.6 93.1 89.9 88.8 PCNN+ATT+GloRE++ 98.0 97.3 96.0 93.6 91.0 89.8 Table 1: Relation extraction manual evaluation results: Precision of top 1000 predictions. textual relation embeddings of all contextual sentences of an entity pair, and project the average embedding to the target KB relations. We then construct an ensemble model by a weighted combination of predictions from the base model and the textual relation embedding. Same as (Su et al., 2018), we use PCNN+ATT (Lin et al., 2016) as our base model. GloRE++ improves its best F1-score from 42.7% to 45.2%, slightly outperforming the previous state-of-theart (GloRE, 44.7%). As shown in previous work (Su et al., 2018), on NYT dataset, due to a significant amount of false negatives, the PR curve on the held-out set may not be an accurate measure of performance. Therefore, we mainly employ manual evaluation. We invite graduate students to check top 1000 predictions of each method. 
They are present with the entity pair, the prediction, and all the contextual sentences of the entity pair. Each prediction is examined by two students until reaching an agreement after discussion. Besides, the students are not aware of the source of the predictions. Table 1 shows the manual evaluation results. Both GloRE+ and GloRE++ get improvements over GloRE. GloRE++ obtains the best results for top 700, 900 and 1000 predictions. 4.2 Knowledge Base Completion We experiment on another relational understanding task, knowledge base (KB) completion, on the popular FB15k-237 dataset (Toutanova et al., 2015). The goal is to predict missing relation facts based on a set of known entities, KB relations, and textual mentions. (Toutanova et al., 2015) use a convolutional neural network (CNN) to model textual relations. We replace their CNN with our pretrained embedding followed by one simple feedforward projection layer. As in (Toutanova et al., 2015), we use the best performing DISTMULT and E+DISTMULT as the base models. DISTMULT (Yang et al., 2015) learns latent vectors for the entities and each relation type, while model E (Riedel et al., 2013) learns two latent vectors for each relation type, associated with its subject and object entities respectively. E+DISTMULT is a combination model that ensembles the predictions from individual models, and is trained jointly. We conduct experiments using only KB relations (KB only), using their CNN to model textual relations (Conv), and using our embedding to model textual relations (Emb). The models are tested on predicting the object entities of a set of KB triples disjoint from the training set, given the subject entity and the relation type. Table 2 shows the performances of all models measured by mean reciprocal rank (MRR) of the correct entity, and HITS@10 (the percentage of test instances for which the correct entity is ranked within the top 10 predictions). We also show the performances on the two subsets of the test set, with and without textual mentions. The pre-trained embedding achieves comparable or better results to the CNN model trained with indomain data. Figure 2: t-SNE visualization of our textual relation embeddings on ClueWeb validation data 5 Analysis t-SNE visualization To measure the intrinsic property of the learned textual relation embedding, 4The result of our implementation is slightly different from the original paper. We have communicated with the authors and agreed on the plausibility of the result. 1328 Model Overall With mentions Without mentions MRR HITS@10 MRR HITS@10 MRR HITS@10 DISTMULT (KB only) 35.8 51.8 27.3 39.5 39 56.3 Conv-DISTMULT 36.5 52.5 28.5 41.4 39.4 56.5 Emb-DISTMULT (GloRE+) 36.4 52.6 28.8 41.8 39.3 56.7 Emb-DISTMULT (GloRE++) 36.6 53.0 28.0 40.8 39.8 57.1 E+DISTMULT (KB only) 37.8 53.5 29.5 43 40.9 57.3 Conv-E+Conv-DISTMULT 38.7 54.4 30.0 43.8 41.9 58.2 Emb-E+Emb-DISTMULT (GloRE+) 38.8 54.2 30.0 43.3 42.0 58.2 Emb-E+Emb-DISTMULT (GloRE++) 38.9 54.4 30.0 43.5 42.1 58.3 Table 2: Results of KB completion on FB15k-237 dataset4, measured by MRR and HITS@10 (Both scaled by 100). Subject and object Francis Clark Howell, Kansas City KB relation people.person.place of birth Textual relation in NYT train set nsubjpass ←−−−−−−−born nmod:on −−−−−−→nov. nmod:in −−−−−→ Corresponding sentence in NYT train set ...Francis Clark Howell was born on nov. 27, 1925, in Kansas City, ... 
Top-5 nearest neighbors in ClueWeb train set Textual relation Cosine similarity A corresponding sentence in ClueWeb raw data nsubjpass ←−−−−−−−born nmod:in −−−−−→1295 nmod:in −−−−−→ 0.61 ...According to the Lonely Planet Guide to Venice, St. Roch was born in 1295 in Montpellier, France, and at the age of 20 began wandering... nsubjpass ←−−−−−−−born nmod:in −−−−−→1222 nmod:in −−−−−→ 0.61 ...Isabel BIGOD was born in 1222 in Thetford Abbey, Norfolk, England... nsubjpass ←−−−−−−−born dobj −−−→Lannerback nmod:in −−−−−→ 0.60 ...Yngwie (pronounced ”ING-vay”) Malmsteen was born Lars Johann Yngwie Lannerback in Stockholm, Sweden, in 1963, ... nsubjpass ←−−−−−−− born nmod:in −−−−−→ Leigha appos −−−−→ Muzaffargarh nmod:in −−−−−→ 0.57 ...Satya Paul - Indian Designer Satya Paul was born in Leigha, Muzaffargarh in Pakistan, and came to India during the partition times... nsubjpass ←−−−−−−−born nmod:on −−−−−−→raised nmod:in −−−−−→ 0.55 ...Governor Gilmore was born on October 6, 1949 and raised in Richmond, Virginia... Table 3: Case study: Textual relation embedding model can well generalize to unseen textual relations via capturing common shared sub-structures. we apply t-SNE visualization (Maaten and Hinton, 2008) on the learned embedding of ClueWeb validation set. We filter out infrequent textual relations and assign labels to the textual relations when they cooccur more than half of the times with a KB relation. The visualization result of GloRE++ embedding associating with the top-10 frequent KB relations is shown in Figure 2. As we can see, similar textual relations are grouped together while dissimilar ones are separated. This implies that the embedding model can well generate textual relation representation for unseen textual relations, and can potentially serve as relational features to help tasks in unsupervised setting. Case Study To show that the embedding model generalizes to unseen textual relations via capturing crucial textual sub-patterns, we randomly pick some textual relations in NYT train set but not in ClueWeb train set, and compare with its top5 nearest neighbors in ClueWeb train set, based on the similarity of the learned embedding. A case study is shown in Table 3. We can see that the KB relation place of birth often collocates with a preposition in indicating the object fits into a location type, and some key words like born. Together, the sub-structure born in serves as a strong indicator for place of birth relation. There is almost always some redundant information in the textual relations, for example in the textual relation nsubjpass ←−−−−−−born nmod:on −−−−−→nov. nmod:in −−−−−→, the sub-structure nmod:on −−−−−→nov. does not carry crucial information indicating the target relation. A good textual relation embedding model should be capable of learning to attend to the crucial semantic patterns. Acknowledgment The authors would like to thank the anonymous reviewers for their thoughtful comments. This research was sponsored in part by the Army Research Laboratory under cooperative agreements W911NF09-2-0053 and NSF IIS 1528175. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein. 1329 References Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. 
Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the ACM SIGMOD International conference on Management of data, pages 1247–1250. ACM. Razvan C Bunescu and Raymond J Mooney. 2005. A shortest path dependency kernel for relation extraction. In Proceedings of Conference on Empirical Methods in Natural Language Processing, pages 724–731. Association for Computational Linguistics. Jamie Callan, Mark Hoy, Changkuk Yoo, and Le Zhao. 2009. Clueweb09 data set. Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of Conference on Empirical Methods in Natural Language Processing, pages 740–750. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Evgeniy Gabrilovich, Michael Ringgaard, and Amarnag Subramanya. 2013. FACC1: Freebase annotation of ClueWeb corpora, version 1 (release date 2013-06-26, format version 1, correction level 0). http://lemurproject.org/clueweb09/. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 541–550. Association for Computational Linguistics. Guoliang Ji, Kang Liu, Shizhu He, Jun Zhao, et al. 2017. Distant supervision for relation extraction with sentence-level attention and entity descriptions. In Proceedings of the AAAI Conference on Artificial Intelligence. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294–3302. Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the International Conference on Machine Learning, pages 1188–1196. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 2124–2133. Association for Computational Linguistics. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the Annual Conference on Neural Information Processing Systems, pages 3111–3119. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 1003–1011. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of Conference on Empirical Methods in Natural Language Processing, pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. 
Deep contextualized word representations. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 148–163. Springer. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics. Yu Su, Honglei Liu, Semih Yavuz, Izzeddin Gur, Huan Sun, and Xifeng Yan. 2018. Global relation embedding for relation extraction. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of Conference on Empirical Methods in Natural Language Processing, pages 455–465. Association for Computational Linguistics. 1330 Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of Conference on Empirical Methods in Natural Language Processing. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the Annual Conference on Neural Information Processing Systems, pages 6000–6010. Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, and Andrew McCallum. 2016. Multilingual relation extraction using compositional universal schema. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics. Yi Wu, David Bamman, and Stuart Russell. 2017. Adversarial training for relation extraction. In Proceedings of Conference on Empirical Methods in Natural Language Processing. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of Conference on Empirical Methods in Natural Language Processing, pages 1785–1794. Association for Computational Linguistics. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of the International Conference on Learning Representations. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of Conference on Empirical Methods in Natural Language Processing, pages 1753–1762. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1331–1339 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1331 Graph Neural Networks with Generated Parameters for Relation Extraction Hao Zhu1 Yankai Lin1 Zhiyuan Liu1 Jie Fu2 Tat-seng Chua3 Maosong Sun1 1 State Key Lab on Intelligent Technology and Systems Department of Computer Science and Technology Institute for Artificial Intelligence, Tsinghua Univerisity, Beijing, China 2 National University of Singapore, Singapore 3 Universit´e de Montr´eal, Montr´eal, Qu´ebec, Canada {{zhuhao15,linyk14}@mails,{liuzy, sms}@}tsinghua.edu.cn [email protected],[email protected] Abstract In this paper, we propose a novel graph neural network with generated parameters (GPGNNs). The parameters in the propagation module, i.e. the transition matrices used in message passing procedure, are produced by a generator taking natural language sentences as inputs. We verify GP-GNNs in relation extraction from text, both on bag- and instancesettings. Experimental results on a humanannotated dataset and two distantly supervised datasets show that multi-hop reasoning mechanism yields significant improvements. We also perform a qualitative analysis to demonstrate that our model could discover more accurate relations by multi-hop relational reasoning. Codes and data are released at https: //github.com/thunlp/gp-gnn. 1 Introduction In recent years, graph neural networks (GNNs) have been applied to various fields of machine learning, including node classification (Kipf and Welling, 2016), relation classification (Schlichtkrull et al., 2017), molecular property prediction (Gilmer et al., 2017), few-shot learning (Garcia and Bruna, 2018), and achieved promising results on these tasks. These works have demonstrated GNNs’ strong power to process relational reasoning on graphs. Relational reasoning aims to abstractly reason about entities/objects and their relations, which is an important part of human intelligence. Besides graphs, relational reasoning is also of great importance in many natural language processing The original idea comes from several discussions between Hao Zhu and Jie Fu while Hao Zhu visiting NUS; Hao Zhu designed research, prepared datasets, and conducted experiments; Jie Fu, Yankai Lin, and Zhiyuan Liu also participated in discussion while planning experiments; Zhiyuan Liu, Tat-seng Chua and Maosong Sun proofread the paper. Zhiyuan Liu serves as the corresponding author. tasks such as question answering, relation extraction, summarization, etc. Consider the example shown in Fig. 1, existing relation extraction models could easily extract the facts that Luc Besson directed a film L´eon: The Professional and that the film is in English, but fail to infer the relationship between Luc Besson and English without multi-hop relational reasoning. By considering the reasoning patterns, one can discover that Luc Besson could speak English following a reasoning logic that Luc Besson directed L´eon: The Professional and this film is in English indicates Luc Besson could speak English. However, most existing GNNs can only process multi-hop relational reasoning on pre-defined graphs and cannot be directly applied in natural language relational reasoning. Enabling multi-hop relational reasoning in natural languages remains an open problem. 
To address this issue, in this paper, we propose graph neural networks with generated parameters (GP-GNNs), to adapt graph neural networks to solve the natural language relational reasoning task. GP-GNNs first constructs a fullyconnected graph with the entities in the sequence of text. After that, it employs three modules to process relational reasoning: (1) an encoding module which enables edges to encode rich information from natural languages, (2) a propagation module which propagates relational information among various nodes, and (3) a classification module which makes predictions with node representations. As compared to traditional GNNs, GP-GNNs could learn edge parameters from natural languages, extending it from performing inference on only non-relational graphs or graphs with a limited number of edge types to unstructured inputs such as texts. In the experiments, we apply GP-GNNs to a classic natural language relational reasoning task: 1332 Léon: The Professional is a 1996 English-language French thriller film directed by Luc Besson. Léon English Luc Besson Language Spoken Language Cast member Figure 1: An example of relation extraction from plain text. Given a sentence with several entities marked, we model the interaction between these entities by generating the weights of graph neural networks. Modeling the relationship between “L´eon” and “English” as well as “Luc Besson” helps discover the relationship between “Luc Besson” and “English”. relation extraction from text. We carry out experiments on Wikipedia corpus aligned with Wikidata knowledge base (Vrandeˇci´c and Kr¨otzsch, 2014) and build a human annotated test set as well as two distantly labeled test sets with different levels of denseness.Experiment results show that our model outperforms other models on relation extraction task by considering multi-hop relational reasoning. We also perform a qualitative analysis which shows that our model could discover more relations by reasoning more robustly as compared to baseline models. Our main contributions are in two-fold: (1) We extend a novel graph neural network model with generated parameters, to enable relational message-passing with rich text information, which could be applied to process relational reasoning on unstructured inputs such as natural language. (2) We verify our GP-GNNs on the task of relation extraction from text, which demonstrates its ability on multi-hop relational reasoning as compared to those models which extract relationships separately. Moreover, we also present three datasets, which could help future researchers compare their models in different settings. 2 Related Work 2.1 Graph Neural Networks (GNNs) GNNs were first proposed in (Scarselli et al., 2009) and are trained via the Almeida-Pineda algorithm (Almeida, 1987). Later the authors in Li et al. (2016) replace the Almeida-Pineda algorithm with the more generic backpropagation and demonstrate its effectiveness empirically. Gilmer et al. (2017) propose to apply GNNs to molecular property prediction tasks. Garcia and Bruna (2018) shows how to use GNNs to learn classifiers on image datasets in a few-shot manner. Gilmer et al. (2017) study the effectiveness of message-passing in quantum chemistry. Dhingra et al. (2017) apply message-passing on a graph constructed by coreference links to answer relational questions. There are relatively fewer papers discussing how to adapt GNNs to natural language tasks. 
For example, Marcheggiani and Titov (2017) propose to apply GNNs to semantic role labeling and Schlichtkrull et al. (2017) apply GNNs to knowledge base completion tasks. Zhang et al. (2018) apply GNNs to relation extraction by encoding dependency trees, and De Cao et al. (2018) apply GNNs to multi-hop question answering by encoding co-occurence and coreference relationships. Although they also consider applying GNNs to natural language processing tasks, they still perform message-passing on predefined graphs. Johnson (2017) introduces a novel neural architecture to generate a graph based on the textual input and dynamically update the relationship during the learning process. In sharp contrast, this paper focuses on extracting relations from real-world relation datasets. 2.2 Relational Reasoning Relational reasoning has been explored in various fields. For example, Santoro et al. (2017) propose a simple neural network to reason the relationship of objects in a picture, Xu et al. (2017) build up a scene graph according to an image, and Kipf et al. (2018) model the interaction of physical objects. In this paper, we focus on the relational reasoning in the natural language domain. Existing works (Zeng et al., 2014, 2015; Lin et al., 2016) have demonstrated that neural networks are capa1333 ble of capturing the pair-wise relationship between entities in certain situations. For example, Zeng et al. (2014) is one of the earliest works that applies a simple CNN to this task, and Zeng et al. (2015) further extends it with piece-wise maxpooling. Nguyen and Grishman (2015) propose a multi-window version of CNN for relation extraction. Lin et al. (2016) study an attention mechanism for relation extraction tasks. Peng et al. (2017) predict n-ary relations of entities in different sentences with Graph LSTMs. Le and Titov (2018) treat relations as latent variables which are capable of inducing the relations without any supervision signals. Zeng et al. (2017) show that the relation path has an important role in relation extraction. Miwa and Bansal (2016) show the effectiveness of LSTMs (Hochreiter and Schmidhuber, 1997) in relation extraction. Christopoulou et al. (2018) proposed a walk-based model to do relation extraction. The most related work is Sorokin and Gurevych (2017), where the proposed model incorporates contextual relations with an attention mechanism when predicting the relation of a target entity pair. The drawback of existing approaches is that they could not make full use of the multihop inference patterns among multiple entity pairs and their relations within the sentence. 3 Graph Neural Network with Generated Parameters (GP-GNNs) We first define the task of natural language relational reasoning. Given a sequence of text with m entities, it aims to reason on both the text and entities and make a prediction of the labels of the entities or entity pairs. In this section, we will introduce the general framework of GP-GNNs. GP-GNNs first build a fully-connected graph G = (V, E), where V is the set of entities, and each edge (vi, vj) ∈ E, vi, vj ∈V corresponds to a sequence s = xi,j 0 , xi,j 1 , . . . , xi,j l−1 extracted from the text. After that, GP-GNNs employ three modules including (1) encoding module, (2) propagation module and (3) classification module to process relational reasoning, as shown in Fig. 2. 3.1 Encoding Module The encoding module converts sequences into transition matrices corresponding to edges, i.e. 
the parameters of the propagation module, by A(n) i,j = f(E(xi,j 0 ), E(xi,j 1 ), · · · , E(xi,j l−1); θn e ), (1) where f(·) could be any model that could encode sequential data, such as LSTMs, GRUs, CNNs, E(·) indicates an embedding function, and θn e denotes the parameters of the encoding module of n-th layer. 3.2 Propagation Module The propagation module learns representations for nodes layer by layer. The initial embeddings of nodes, i.e. the representations of layer 0, are task-related, which could be embeddings that encode features of nodes or just one-hot embeddings. Given representations of layer n, the representations of layer n + 1 are calculated by h(n+1) i = X vj∈N(vi) σ(A(n) i,j h(n) j ), (2) where N(vi) denotes the neighbours of node vi in graph G and σ(·) denotes a non-linear activation function. 3.3 Classification Module Generally, the classification module takes node representations as inputs and outputs predictions. Therefore, the loss of GP-GNNs could be calculated as L = g(h0 0:|V|−1, h1 0:|V|−1, . . . , hK 0:|V|−1, Y ; θc), (3) where θc denotes the parameters of the classification module, K is the number of layers in propagation module and Y denotes the ground truth label. The parameters in GP-GNNs are trained by gradient descent methods. 4 Relation Extraction with GP-GNNs Relation extraction from text is a classic natural language relational reasoning task. Given a sentence s = (x0, x1, . . . , xl−1), a set of relations R and a set of entities in this sentence Vs = {v1, v2, . . . , v|Vs|}, where each vi consists of one or a sequence of tokens, relation extraction from text is to identify the pairwise relationship rvi,vj ∈R between each entity pair (vi, vj). In this section, we will introduce how to apply GP-GNNs to relation extraction. 1334 Encoding Module Propagation Module Classification Module h(n) 1 h(n) 2 h(n) 3 A(n) 1,2 A(n) 2,3 A(n) 3,1 x1,2 3 x1,2 4 x1,2 2 x1,2 1 x1,2 0 Figure 2: Overall architecture: an encoding module takes a sequence of vector representations as inputs, and output a transition matrix as output; a propagation module propagates the hidden states from nodes to its neighbours with the generated transition matrix; a classification module provides task-related predictions according to nodes representations. 4.1 Encoding Module To encode the context of entity pairs (or edges in the graph), we first concatenate the position embeddings with word embeddings in the sentence: E(xi,j t ) = [xt; pi,j t ], (4) where xt denotes the word embedding of word xt and pi,j t denotes the position embedding of word position t relative to the entity pair’s position i, j (Details of these two embeddings are introduced in the next two paragraphs.) After that, we feed the representations of entity pairs into encoder f(·) which contains a bi-directional LSTM and a multilayer perceptron: A(n) i,j = [MLPn(BiLSTMn((E(xi,j 0 ), E(xi,j 1 ), · · · , E(xi,j l−1))], (5) where n denotes the index of layer 1, [·] means reshaping a vector as a matrix, BiLSTM encodes a sequence by concatenating tail hidden states of the forward LSTM and head hidden states of the backward LSTM together and MLP denotes a multilayer perceptron with non-linear activation σ. Word Representations We first map each token xt of sentence {x0, x1, . . . , xl−1} to a kdimensional embedding vector xt using a word embedding matrix We ∈R|V |×dw, where |V | is the size of the vocabulary. 
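To summarize how the pieces defined so far interact, the following PyTorch-style sketch illustrates Eqs. (2), (4) and (5): an encoder generates one transition matrix per entity pair from the word and position embeddings, and a propagation step transforms node states with the generated matrices. The class and function names, the default dimensions, and the use of the final LSTM states as the sequence summary are assumptions made for illustration; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class EdgeEncoder(nn.Module):
    """Encoding module sketch (Eqs. 4-5): word + position embeddings of the
    sentence for an entity pair (i, j) are fed to a BiLSTM; an MLP maps the
    concatenated final states to a vector that is reshaped into A_ij."""

    def __init__(self, vocab_size, d_word=50, d_pos=5, d_hidden=256, d_node=12):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_word)
        self.pos_emb = nn.Embedding(3, d_pos)   # token in entity i / entity j / neither
        self.bilstm = nn.LSTM(d_word + d_pos, d_hidden,
                              batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(2 * d_hidden, d_node * d_node), nn.ReLU())
        self.d_node = d_node

    def forward(self, tokens, positions):
        # tokens, positions: (batch, seq_len) index tensors for one entity pair
        x = torch.cat([self.word_emb(tokens), self.pos_emb(positions)], dim=-1)  # Eq. (4)
        _, (h_n, _) = self.bilstm(x)                       # h_n: (2, batch, d_hidden)
        summary = torch.cat([h_n[0], h_n[1]], dim=-1)      # final forward + backward states
        return self.mlp(summary).view(-1, self.d_node, self.d_node)   # Eq. (5)

def propagate(h, A):
    """One propagation step (Eq. 2) on a fully-connected graph.
    h: (num_nodes, d_node) node states; A: (num_nodes, num_nodes, d_node, d_node)."""
    messages = torch.relu(torch.einsum('ijab,jb->ija', A, h))  # sigma(A_ij h_j) for every edge
    return messages.sum(dim=1)                                  # sum over neighbours j
```

Stacking K such propagation steps, each with its own generated matrices, is what gives the model its K-hop reasoning depth.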
Throughout this paper, we stick to 50-dimensional GloVe embeddings pre-trained on a 6-billion-word corpus (Pennington et al., 2014). 1Adding index to neural models means their parameters are different among layers. Position Embedding In this work, we consider a simple entity marking scheme2: we mark each token in the sentence as either belonging to the first entity vi, the second entity vj or to neither of those. Each position marker is also mapped to a dp-dimensional vector by a position embedding matrix P ∈R3×dp. We use notation pi,j t to represent the position embedding for xt corresponding to entity pair (vi, vj). 4.2 Propagation Module Next, we use Eq. (2) to propagate information among nodes where the initial embeddings of nodes and number of layers are further specified as follows. The Initial Embeddings of Nodes Suppose we are focusing on extracting the relationship between entity vi and entity vj, the initial embeddings of them are annotated as h(0) vi = asubject, and h(0) vj = aobject, while the initial embeddings of other entities are set to all zeros. We set special values for the head and tail entity’s initial embeddings as a kind of “flag” messages which we expect to be passed through propagation. Annotators asubject and aobject could also carry the prior knowledge about subject entity and object entity. In our experiments, we generalize the idea of Gated Graph Neural Networks (Li et al., 2016) by setting asubject = [1; 0]⊤and aobject = [0; 1]⊤3. 2As pointed out by Sorokin and Gurevych (2017), other position markers lead to no improvement in performance. 3The dimensions of 1 and 0 are the same. Hence, dr should be positive even integers. The embedding of subject and object could also carry the type information by changing annotators. We leave this extension for future work. 1335 Number of Layers In general graphs, the number of layers K is chosen to be of the order of the graph diameter so that all nodes obtain information from the entire graph. In our context, however, since the graph is densely connected, the depth is interpreted simply as giving the model more expressive power. We treat K as a hyperparameter, the effectiveness of which will be discussed in detail (Sect. 5.4). 4.3 Classification Module The output module takes the embeddings of the target entity pair (vi, vj) as input, which are first converted by: rvi,vj = [[h(1) vi ⊙h(1) vj ]⊤; [h(2) vi ⊙h(2) vj ]⊤; . . . ; [h(K) vi ⊙h(K) vj ]⊤], (6) where ⊙represents element-wise multiplication. This could be used for classification: P(rvi,vj|h, t, s) = softmax(MLP(rvi,vj)), (7) where rvi,vj ∈R, and MLP denotes a multi-layer perceptron module. We use cross entropy here as the classification loss L = X s∈S X i̸=j log P(rvi,vj|i, j, s), (8) where rvi,vj denotes the relation label for entity pair (vi, vj) and S denotes the whole corpus. In practice, we stack the embeddings for every target entity pairs together to infer the underlying relationship between each pair of entities. We use PyTorch (Paszke et al., 2017) to implement our models. To make it more efficient, we avoid using loop-based, scalar-oriented code by matrix and vector operations. 5 Experiments Our experiments mainly aim at: (1) showing that our best models could improve the performance of relation extraction under a variety of settings; (2) illustrating that how the number of layers affect the performance of our model; and (3) performing a qualitative investigation to highlight the difference between our models and baseline models. 
In both part (1) and part (2), we do three subparts of experiments: (i) we will first show that our models could improve instance-level relation extraction on a human annotated test set, and (ii) then we will show that our models could also help enhance the performance of bag-level relation extraction on a distantly labeled test set 4, and (iii) we also split a subset of distantly labeled test set, where the number of entities and edges is large. 5.1 Experiment Settings 5.1.1 Datasets Distantly labeled set Sorokin and Gurevych (2017) have proposed a dataset with Wikipedia corpora. There is a small difference between our task and theirs: our task is to extract the relationship between every pair of entities in the sentence, whereas their task is to extract the relationship between the given entity pair and the context entity pairs. Therefore, we need to modify their dataset: (1) We added reversed edges if they are missing from a given triple, e.g. if triple (Earth, part of, Solar System) exists in the sentence, we add a reversed label, (Solar System, has a member, Earth), to it; (2) For all of the entity pairs with no relations, we added “NA” labels to them.5 We use the same training set for all of the experiments. Human annotated test set Based on the test set provided by (Sorokin and Gurevych, 2017), 5 annotators6 are asked to label the dataset. They are asked to decide whether or not the distant supervision is right for every pair of entities. Only the instances accepted by all 5 annotators are incorporated into the human annotated test set. There are 350 sentences and 1,230 triples in this test set. Dense distantly labeled test set We further split a dense test set from the distantly labeled test set. Our criteria are: (1) the number of entities should be strictly larger than 2; and (2) there must be at least one circle (with at least three entities) in the ground-truth label of the sentence 7. This test set could be used to test our methods’ performance on sentences with the complex interaction between entities. There are 1,350 sentences and more than 17,915 triples and 7,906 relational facts in this test set. 4Bag-level relation extraction is a widely accepted scheme for relation extraction with distant supervision, which means the relation of an entity pair is predicted by aggregating a bag of instances. 5We also resolve entities at the same position and remove self-loops from the previous dataset. Furthermore, we limit the number of entities in one sentence to 9, resulting in only 0.0007 data loss. 6They are all well-educated university students. 7Every edge in the circle has a non-“NA” label. 1336 5.1.2 Models for Comparison We select the following models for comparison, the first four of which are our baseline models. Context-Aware RE, proposed by Sorokin and Gurevych (2017). This model utilizes attention mechanism to encode the context relations for predicting target relations. It was the state-of-the-art models on Wikipedia dataset. This baseline is implemented by ourselves based on authors’ public repo8. Multi-Window CNN. Zeng et al. (2014) utilize convolutional neural networks to classify relations. Different from the original version of CNN proposed in Zeng et al. (2014), our implementation, follows Nguyen and Grishman (2015), concatenates features extracted by three different window sizes: 3, 5, 7. PCNN, proposed by Zeng et al. (2015). This model divides the whole sentence into three pieces and applies max-pooling after convolution layer piece-wisely. 
For CNN and following PCNN, the entity markers are the same as originally proposed in Zeng et al. (2014, 2015). LSTM or GP-GNN with K = 1 layer. Bidirectional LSTM (Schuster and Paliwal, 1997) could be seen as an 1-layer variant of our model. GP-GNN with K = 2 or K = 3 layers. These models are capable of performing 2-hop reasoning and 3-hop reasoning, respectively. 5.1.3 Hyper-parameters We select the best parameters for the validation set. We select non-linear activation functions between relu and tanh, and select dn among {2, 4, 8, 12, 16}9. We have also tried two forms of adjacent matrices: tied-weights (set A(n) = A(n+1)) and untied-weights. Table 1 shows our best hyper-parameter settings, which are used in all of our experiments. 5.2 Evaluation Details So far, we have only talked about the way to implement sentence-level relation extraction. To evaluate our models and baseline models in bag-level, we utilize a bag of sentences with a given entity pair to score the relations between them. Zeng et al. (2015) formalize the bag-level relation extraction as multi-instance learning. Here, we fol8https://github.com/UKPLab/ emnlp2017-relation-extraction 9We set all dns to be the same as we do not see improvements using different dns Hyper-parameters Value learning rate 0.001 batch size 50 dropout ratio 0.5 hidden state size 256 non-linear activation σ relu embedding size for #layers = 1 8 embedding size for #layers = 2 and 3 12 adjacent matrices untied Table 1: Hyper-parameters settings. low their idea and define the score function of an entity pair and its corresponding relation r as a max-one setting: E(r|vi, vj, S) = max s∈S P(rvi,vj|i, j, s). (9) Dataset Human Annotated Test Set Metric Acc Macro F1 Multi-Window CNN 47.3 17.5 PCNN 30.8 3.2 Context-Aware RE 68.9 44.9 GP-GNN (#layers=1) 62.9 44.1 GP-GNN (#layers=2) 69.5 44.2 GP-GNN (#layers=3) 75.3 47.9 Table 2: Results on human annotated dataset 5.3 Effectiveness of Reasoning Mechanism From Table 2 and 3, we can see that our best models outperform all the baseline models significantly on all three test sets. These results indicate our model could successfully conduct reasoning on the fully-connected graph with generated parameters from natural language. These results also indicate that our model not only performs well on sentence-level relation extraction but also improves on bag-level relation extraction. Note that Context-Aware RE also incorporates context information to predict the relation of the target entity pair, however, we argue that Context-Aware RE only models the co-occurrence of various relations, ignoring whether the context relation participates in the reasoning process of relation extraction of the target entity pair. Context-Aware RE may introduce more noise, for it may mistakenly increase the probability of a relation with the similar topic with the context relations. We will give samples to illustrate this issue in Sect. 5.5. Another interesting observation is that our #layers=1 version outperforms CNN and PCNN in these three datasets. 
One probable reason is that sentences from Wikipedia are often complex, 1337 Dataset Distantly Labeled Test Set Dense Distantly Labeled Test Set Metric P@5% P@10% P@15% P@20% P@5% P@10% P@15% P@20% Multi-Window CNN 78.9 78.4 76.2 72.9 86.2 83.4 81.4 79.1 PCNN 73.0 65.4 58.1 51.2 85.3 79.1 72.4 68.1 Context-Aware RE 90.8 89.9 88.5 87.2 93.5 93.0 93.8 93.0 GP-GNN (#layers=1) 90.5 89.9 88.2 87.2 97.4 93.5 92.4 91.9 GP-GNN (#layers=2) 92.5 92.0 89.3 87.1 95.0 94.6 95.2 94.2 GP-GNN (#layers=3) 94.2 92.0 89.7 88.3 98.5 97.4 96.6 96.1 Table 3: Results on distantly labeled test set which may be hard to model for CNN and PCNN. Similar conclusions are also reached by Zhang and Wang (2015). 0.00 0.05 0.10 0.15 0.20 0.25 Recall 0.80 0.82 0.84 0.86 0.88 0.90 0.92 0.94 0.96 Precision Ours(#layers=3) Ours(#layers=2) Ours(#layers=1) Context Aware RE 0.00 0.05 0.10 0.15 0.20 0.25 Recall 0.90 0.92 0.94 0.96 0.98 1.00 Precision Ours(#layers=3) Ours(#layers=2) Ours(#layers=1) Context Aware RE Figure 3: The aggregated precision-recall curves of our models with different number of layers on distantly labeled test set (left) and dense distantly labeled test set (right). We also add Context Aware RE for comparison. 5.4 The Effectiveness of the Number of Layers The number of layers represents the reasoning ability of our models. A K-layer version has the ability to infer K-hop relations. To demonstrate the effects of the number of layers, we also compare our models with different numbers of layers. From Table 2 and Table 3, we could see that on all three datasets, 3-layer version achieves the best. We could also see from Fig. 3 that as the number of layers grows, the curves get higher and higher precision, indicating considering more hops in reasoning leads to better performance. However, the improvement of the third layer is much smaller on the overall distantly supervised test set than the one on the dense subset. This observation reveals that the reasoning mechanism could help us identify relations especially on sentences where there are more entities. We could also see that on the human annotated test set 3layer version to have a greater improvement over 2-layer version as compared with 2-layer version over 1-layer version. It is probably due to the reason that bag-level relation extraction is much easier. In real applications, different variants could be selected for different kind of sentences or we can also ensemble the prediction from different models. We leave these explorations for future work. 5.5 Qualitative Results: Case Study Tab. 4 shows qualitative results that compare our GP-GNN model and the baseline models. The results show that GP-GNN has the ability to infer the relationship between two entities with reasoning. In the first case, GP-GNN implicitly learns a logic rule ∃y, x ∼cast-member −−−−−−−−→y original language −−−−−−−−−→z ⇒ x language spoken −−−−−−−−−→z to derive (Oozham, language spoken, Malayalam) and in the second case our model implicitly learns another logic rule ∃y, x owned-by −−−−−→y located in −−−−−→z ⇒x located in −−−−−→z to find the fact (BankUnited Center, located in, English). Note that (BankUnited Center, located in, English) is even not in Wikidata, but our model could identify this fact through reasoning. We also find that Context-Aware RE tends to predict relations with similar topics. 
For example, in the third case, share border with and located in are both relations about ter1338 The association was organized in Enterprise (now known as Redbush) Johnson County, Kentucky in 1894 and was incorporated in 1955, after relocating to Gallipolis, Ohio. Sentence GP-GNNs (#layers = 3) LSTM Context Aware Relation Extraction Oozham ( or Uzham ) is an upcoming 2016 Malayalam drama film written and directed by Jeethu Joseph with Prithviraj Sukumaran in the lead role. Ground Truth The third annual of the 2006 Premios Juventud (Youth Awards) edition will be held on July 13, 2006 at the BankUnited Center from the University of Miami in Coral Gables, Florida . Oozham Malayalam Jeethu Joseph Prithviraj Sukumaran cast member director original language language spoken Oozham Malayalam Jeethu Joseph Prithviraj Sukumaran cast member director original language language spoken Oozham Malayalam Jeethu Joseph Prithviraj Sukumaran cast member director original language Oozham Malayalam Jeethu Joseph Prithviraj Sukumaran cast member director original language BankUnited Center University of Miami Coral Gables, Florida located in the administrative territorial entity BankUnited Center University of Miami Coral Gables, Florida located in the administrative territorial entity BankUnited Center University of Miami Coral Gables, Florida owned by located in the administrative territorial entity BankUnited Center University of Miami Coral Gables, Florida owned by located in the administrative territorial entity located in the administrative territorial entity Redbush Johnson County Kentucky Ohio located in the administrative territorial entity located in the administrative territorial entity Redbush Johnson County Kentucky Ohio located in the administrative territorial entity located in the administrative territorial entity Redbush Johnson County Kentucky Ohio located in the administrative territorial entity located in the administrative territorial entity Redbush Johnson County Kentucky Ohio located in the administrative territorial entity located in the administrative territorial entity share border with Table 4: Sample predictions from the baseline models and our GP-GNN model. Ground truth graphs are the subgraph in Wikidata knowledge graph induced by the sets of entities in the sentences. The models take sentences and entity markers as input and produce a graph containing entities (colored and bold) and relations between them. Although “No Relation” is also be seen as a type of relation, we only show other relation types in the graphs. ritory issues. Consequently, Context-Aware RE makes a mistake by predicting (Kentucky, share boarder with, Ohio). As we have discussed before, this is due to its mechanism to model cooccurrence of multiple relations. However, in our model, since Ohio and Johnson County have no relationship, this wrong relation is not predicted. 6 Conclusion and Future Work We addressed the problem of utilizing GNNs to perform relational reasoning with natural languages. Our proposed model, GP-GNN, solves the relational message-passing task by encoding natural language as parameters and performing propagation from layer to layer. Our model can also be considered as a more generic framework for graph generation problem with unstructured input other than text, e.g. image, video, audio. 
In this work, we demonstrate its effectiveness in predicting the relationship between entities in natural language and bag-level and show that by considering more hops in reasoning the performance of relation extraction could be significantly improved. Acknowledgement The authors thank the members of Tsinghua NLP lab10 for their thoughtful suggestions. This work 10 http://thunlp.org is jointly supported by the NSFC project under the grant No. 61661146007 and the NExT++ project, the National Research Foundation, Prime Ministers Office, Singapore under its IRC@Singapore Funding Initiative. Hao Zhu is supported by Tsinghua Initiative Research Program. References Luis B Almeida. 1987. A learning rule for asynchronous perceptrons with feedback in a combinatorial environment. In Proceedings, 1st First International Conference on Neural Networks, pages 609– 618. IEEE. Fenia Christopoulou, Makoto Miwa, and Sophia Ananiadou. 2018. A walk-based model on entity graphs for relation extraction. In Proceedings of ACL, volume 2, pages 81–88. Nicola De Cao, Wilker Aziz, and Ivan Titov. 2018. Question answering by reasoning across documents with graph convolutional networks. arXiv preprint arXiv:1808.09920. Bhuwan Dhingra, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. 2017. Linguistic knowledge as memory for recurrent neural networks. arXiv preprint arXiv:1703.02620. JVictor Garcia and Joan Bruna. 2018. Few-shot learning with graph neural networks. In Proceedings of ICLR. 1339 Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. 2017. Neural message passing for quantum chemistry. In Proceedings of ICML. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, pages 1735–1780. Daniel D Johnson. 2017. Learning graphical state transitions. In Proceedings of ICLR. Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard Zemel. 2018. Neural relational inference for interacting systems. In Proceedings of ICML. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. Proceedings of ICLR. Phong Le and Ivan Titov. 2018. Improving entity linking by modeling latent relations between mentions. Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. 2016. Gated graph sequence neural networks. Proceedings of ICLR. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of ACL, pages 2124–2133. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings EMNLP. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of ACL, pages 1105– 1116. Thien Huu Nguyen and Ralph Grishman. 2015. Relation extraction: Perspective from convolutional neural networks. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 39–48. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph lstms. Transactions of the Association for Computational Linguistics, pages 101–115. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. 
Glove: Global vectors for word representation. In Proceedings of EMNLP, pages 1532–1543. Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Tim Lillicrap. 2017. A simple neural network module for relational reasoning. In Proceedings of NIPS, pages 4967–4976. Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2009. The graph neural network model. IEEE Transactions on Neural Networks, pages 61–80. Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2017. Modeling relational data with graph convolutional networks. arXiv preprint arXiv:1703.06103. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, pages 2673–2681. Daniil Sorokin and Iryna Gurevych. 2017. Contextaware representations for knowledge base relation extraction. In Proceedings of EMNLP, pages 1784– 1789. Denny Vrandeˇci´c and Markus Kr¨otzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM. Danfei Xu, Yuke Zhu, Christopher B Choy, and Li FeiFei. 2017. Scene graph generation by iterative message passing. In Proceedings of CVPR, volume 2. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of EMNLP, pages 1753–1762. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING, pages 2335–2344. Wenyuan Zeng, Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2017. Incorporating relation paths in neural relation extraction. In Proceedings of EMNLP. Dongxu Zhang and Dong Wang. 2015. Relation classification via recurrent neural network. arXiv preprint arXiv:1508.01006. Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of EMNLP.
2019
128
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1340–1350 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1340 Entity-Relation Extraction as Multi-turn Question Answering Xiaoya Li∗♠, Fan Yin∗♠, Zijun Sun∗♦♠, Xiayu Li♠ Arianna Yuan♠,♥, Duo Chai♠, Mingxin Zhou♠and Jiwei Li♠,♣ ♣School of Information, Renmin University of China ♦Computer Center, Peking University ♥Computer Science Department, Stanford University ♠Shannon.AI {xiaoya li, fan yin, zijun sun, xiayu li, duo chai, mingxin zhou, jiwei li}@shannonai.com [email protected] Abstract In this paper, we propose a new paradigm for the task of entity-relation extraction. We cast the task as a multi-turn question answering problem, i.e., the extraction of entities and relations is transformed to the task of identifying answer spans from the context. This multi-turn QA formalization comes with several key advantages: firstly, the question query encodes important information for the entity/relation class we want to identify; secondly, QA provides a natural way of jointly modeling entity and relation; and thirdly, it allows us to exploit the well developed machine reading comprehension (MRC) models. Experiments on the ACE and the CoNLL04 corpora demonstrate that the proposed paradigm significantly outperforms previous best models. We are able to obtain the stateof-the-art results on all of the ACE04, ACE05 and CoNLL04 datasets, increasing the SOTA results on the three datasets to 49.4 (+1.0), 60.2 (+0.6) and 68.9 (+2.1), respectively. Additionally, we construct a newly developed dataset RESUME in Chinese, which requires multi-step reasoning to construct entity dependencies, as opposed to the single-step dependency extraction in the triplet exaction in previous datasets. The proposed multi-turn QA model also achieves the best performance on the RESUME dataset. 1 1 Introduction Identifying entities and their relations is the prerequisite of extracting structured knowledge from unstructured raw texts, which has recieved growing interest these years. Given a chunk of natural language text, the goal of entity-relation extraction is to transform it to a structural knowledge base. For example, given the following text: 1* indicates equal contribution. Person Corp Time Position Musk SpaceX 2002 CEO Musk Tesla 2003 CEO& product architect Musk SolarCity 2006 chairman Musk Neuralink 2016 CEO Musk The Boring Company 2016 Table 1: An illustration of an extracted structural table. In 2002, Musk founded SpaceX, an aerospace manufacturer and space transport services Company, of which he is CEO and lead designer. He helped fund Tesla, Inc., an electric vehicle and solar panel manufacturer, in 2003, and became its CEO and product architect. In 2006, he inspired the creation of SolarCity, a solar energy services Company, and operates as its chairman. In 2016, he co-founded Neuralink, a neurotechnology Company focused on developing brain–computer interfaces, and is its CEO. In 2016, Musk founded The Boring Company, an infrastructure and tunnelconstruction Company. We need to extract four different types of entities, i.e., Person, Company, Time and Position, and three types of relations, FOUND, FOUNDING-TIME and SERVING-ROLE. The text is to be transformed into a structural dataset shown in Table 1. Most existing models approach this task by extracting a list of triples from the text, i.e., REL(e1, e2), which denotes that relation REL holds between entity e1 and entity e2. 
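To make the contrast between the two representations concrete, the rows of Table 1 and the flat triplet view can be written down as plain data structures; the field names and the argument positions chosen for each relation below are illustrative guesses, not a format defined by the paper.

```python
# The structured record the task ultimately wants (the rows of Table 1).
records = [
    {"person": "Musk", "corp": "SpaceX",             "time": "2002", "position": "CEO"},
    {"person": "Musk", "corp": "Tesla",              "time": "2003", "position": "CEO & product architect"},
    {"person": "Musk", "corp": "SolarCity",          "time": "2006", "position": "chairman"},
    {"person": "Musk", "corp": "Neuralink",          "time": "2016", "position": "CEO"},
    {"person": "Musk", "corp": "The Boring Company", "time": "2016", "position": None},
]

# The flat triplet view REL(e1, e2) that most existing models extract.
# Once the rows are flattened into independent triples, it is no longer
# recorded which Time/Position goes with which Company when one person
# holds several roles (argument positions here are guesses).
triples = [
    ("FOUND",         "Musk",   "SpaceX"),
    ("FOUNDING-TIME", "SpaceX", "2002"),
    ("SERVING-ROLE",  "Musk",   "CEO"),
]
```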
Previous models fall into two major categories: the pipelined approach, which first uses tagging models to identify entities, and then uses relation extraction models to identify the relation between each entity pair; and the joint approach, which combines the entity model and the relation model throught different strategies, such as constraints or parameters sharing. There are several key issues with current approaches, both in terms of the task formalization 1341 and the algorithm. At the formalization level, the REL(e1, e2) triplet structure is not enough to fully express the data structure behind the text. Take the Musk case as an example, there is a hierarchical dependency between the tags: the extraction of Time depends on Position since a Person can hold multiple Positions in a Company during different Time periods. The extraction of Position also depends on Company since a Person can work for multiple companies. At the algorithm level, for most existing relation extraction models (Miwa and Bansal, 2016; Wang et al., 2016a; Ye et al., 2016), the input to the model is a raw sentence with two marked mentions, and the output is whether a relation holds between the two mentions. As pointed out in Wang et al. (2016a); Zeng et al. (2018), it is hard for neural models to capture all the lexical, semantic and syntactic cues in this formalization, especially when (1) entities are far away; (2) one entity is involved in multiple triplets; or (3) relation spans have overlaps2. In the paper, we propose a new paradigm to handle the task of entity-relation extraction. We formalize the task as a multi-turn question answering task: each entity type and relation type is characterized by a question answering template, and entities and relations are extracted by answering template questions. Answers are text spans, extracted using the now standard machine reading comprehension (MRC) framework: predicting answer spans given context (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2017; Wang et al., 2016b). To extract structural data like Table 1, the model need to answer the following questions sequentially: • Q: who is mentioned in the text? A: Musk; • Q: which Company / companies did Musk work for? A: SpaceX, Tesla, SolarCity, Neuralink and The Boring Company; • Q: when did Musk join SpaceX? A: 2002; • Q: what was Musk’s Position in SpaceX? A: CEO. Treating the entity-relation extraction task as a multi-turn QA task has the following key advantages: (1) the multi-turn QA setting provides an elegant way to capture the hierarchical dependency of tags. As the multi-turn QA proceeds, we progressively obtain the entities we need for the next turn. This is closely akin to the multi-turn slot filling dialogue system (Williams and Young, 2005; Lemon et al., 2006); (2) the question query encodes important prior information for the relation 2e.g., in text A B C D, (A, C) is a pair and (B, D) is a pair. class we want to identify. This informativeness can potentially solve the issues that existing relation extraction models fail to solve, such as distantlyseparated entity pairs, relation span overlap, etc; (3) the QA framework provides a natural way to simultaneously extract entities and relations: most MRC models support outputting special NONE tokens, indicating that there is no answer to the question. 
Throught this, the original two tasks, entity extraction and relation extraction can be merged to a single QA task: a relation holds if the returned answer to the question corresponding to that relation is not NONE, and this returned answer is the entity that we wish to extract. In this paper, we show that the proposed paradigm, which transforms the entity-relation extraction task to a multi-turn QA task, introduces significant performance boost over existing systems. It achieves state-of-the-art (SOTA) performance on the ACE and the CoNLL04 datasets. The tasks on these datasets are formalized as triplet extraction problems, in which two turns of QA suffice. We thus build a more complicated and more difficult dataset called RESUME which requires to extract biographical information of individuals from raw texts. The construction of structural knowledge base from RESUME requires four or five turns of QA. We also show that this multi-turn QA setting could easilty integrate reinforcement learning (just as in multi-turn dialog systems) to gain additional performance boost. The rest of this paper is organized as follows: Section 2 details related work. We describe the dataset and setting in Section 3, the proposed model in Section 4, and experimental results in Section 5. We conclude this paper in Section 6. 2 Related Work 2.1 Extracting Entities and Relations Many earlier entity-relation extraction systems are pipelined (Zelenko et al., 2003; Miwa et al., 2009; Chan and Roth, 2011; Lin et al., 2016): an entity extraction model first identifies entities of interest and a relation extraction model then constructs relations between the extracted entities. Although pipelined systems has the flexibility of integrating different data sources and learning algorithms, they suffer significantly from error propagation. To tackle this issue, joint learning models have been proposed. Earlier joint learning approaches connect the two models through various dependen1342 cies, including constraints solved by integer linear programming (Yang and Cardie, 2013; Roth and Yih, 2007), card-pyramid parsing (Kate and Mooney, 2010), and global probabilistic graphical models (Yu and Lam, 2010; Singh et al., 2013). In later studies, Li and Ji (2014) extract entity mentions and relations using structured perceptron with efficient beam-search, which is significantly more efficient and less Time-consuming than constraintbased approaches. Miwa and Sasaki (2014); Gupta et al. (2016); Zhang et al. (2017) proposed the tablefilling approach, which provides an opportunity to incorporating more sophisticated features and algorithms into the model, such as search orders in decoding and global features. Neural network models have been widely used in the literature as well. Miwa and Bansal (2016) introduced an end-to-end approach that extract entities and their relations using neural network models with shared parameters, i.e., extracting entities using a neural tagging model and extracting relations using a neural multiclass classification model based on tree LSTMs (Tai et al., 2015). Wang et al. (2016a) extract relations using multi-level attention CNNs. Zeng et al. (2018) proposed a new framework that uses sequence-to-sequence models to generate entityrelation triples, naturally combining entity detection and relation detection. 
Another way to bind the entity and the relation extraction models is to use reinforcement learning or Minimum Risk Training, in which the training signals are given based on the joint decision by the two models. Sun et al. (2018) optimized a global loss function to jointly train the two models under the framework work of Minimum Risk Training. Takanobu et al. (2018) used hierarchical reinforcement learning to extract entities and relations in a hierarchical manner. 2.2 Machine Reading Comprehension Main-stream MRC models (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2017; Wang et al., 2016b) extract text spans in passages given queries. Text span extraction can be simplified to two multiclass classification tasks, i.e., predicting the starting and the ending positions of the answer. Similar strategy can be extended to multi-passage MRC (Joshi et al., 2017; Dunn et al., 2017) where the answer needs to be selected from multiple passages. Multi-passage MRC tasks can be easily simplified to single-passage MRC tasks by concatenating passages (Shen et al., 2017; Wang et al., 2017b). Wang et al. (2017a) first rank the passages and then run single-passage MRC on the selected passage. Tan et al. (2017) train the passage ranking model jointly with the reading comprehension model. Pretraining methods like BERT (Devlin et al., 2018) or Elmo (Peters et al., 2018) have proved to be extremely helpful in MRC tasks. There has been a tendency of casting non-QA NLP tasks as QA tasks (McCann et al., 2018). Our work is highly inspired by Levy et al. (2017). Levy et al. (2017) and McCann et al. (2018) focus on identifying the relation between two pre-defined entities and the authors formalize the task of relation extraction as a single-turn QA task. In the current paper we study a more complicated scenario, where hierarchical tag dependency needs to be modeled and single-turn QA approach no longer suffices. We show that our multi-turn QA method is able to solve this challenge and obtain new state-of-the-art results. 3 Datasets and Tasks 3.1 ACE04, ACE05 and CoNLL04 We use ACE04, ACE05 and CoNLL04 (Roth and Yih, 2004), the widely used entity-relation extraction benchmarks for evaluation. ACE04 defines 7 entity types, including Person (PER), Organization (ORG), Geographical Entities (GPE), Location (loc), Facility (FAC), Weapon (WEA) and Vehicle (VEH). For each pair of entities, it defines 7 relation categories, including Physical (PHYS), Person-Social (PER-SOC), EmploymentOrganization (EMP-ORG), Agent-Artifact (ART), PER/ORG Affiliation (OTHER-AFF), GPE- Affiliation (GPE-AFF) and Discourse (DISC). ACE05 was built upon ACE04. It kept the PER-SOC, ART and GPE-AFF categories from ACE04 but split PHYS into PHYS and a new relation category PARTWHOLE. It also deleted DISC and merged EMPORG and OTHER-AFF into a new category EMPORG. As for CoNLL04, it defines four entity types (LOC, ORG, PERand OTHERS) and five relation categories (LOCATED IN, WORK FOR, ORGBASED IN, LIVE IN ]and KILL). For ACE04 and ACE05, we followed the training/dev/test split in Li and Ji (2014) and Miwa and Bansal (2016)3. For the CoNLL04 dataset, we followed Miwa and Sasaki (2014). 3https://github.com/tticoin/LSTM-ER/. 1343 3.2 RESUME: A newly constructed dataset The ACE and the CoNLL-04 datasets are intended for triplet extraction, and two turns of QA is sufficient to extract the triplet (one turn for head-entities and another for joint extraction of tail-entities and relations). 
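To make the two-turn protocol concrete, here is a minimal, hypothetical sketch in which canned answers stand in for the MRC model of Section 4; the sentence, the question wording, and the relation name are illustrative rather than the paper's exact templates.

```python
# Toy stand-in for the span-extraction (MRC) model of Section 4; the
# canned answers and question wording below are purely illustrative.
def answer(question, sentence):
    canned = {
        "who is mentioned in the text?": ["John Smith"],
        "which organization does John Smith work for?": ["Acme Corp"],
    }
    return canned.get(question, [])      # an empty list plays the role of NONE

sentence = "John Smith works for Acme Corp."
triples = []

# Turn 1: extract head entities (here, persons).
for person in answer("who is mentioned in the text?", sentence):
    # Turn 2: relation + tail entity, with the head entity filled into the template.
    question = f"which organization does {person} work for?"
    for org in answer(question, sentence):
        triples.append(("WORK_FOR", person, org))

print(triples)                           # [('WORK_FOR', 'John Smith', 'Acme Corp')]
```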
These datasets do not involve hierarchical entity relations as in our previous Musk example, which are prevalent in real life applications. Therefore, we construct a new dataset called RESUME. We extracted 841 paragraphs from chapters describing management teams in IPO prospectuses. Each paragraph describes some work history of an executive. We wish to extract the structural data from the resume. The dataset is in Chinese. The following shows an examples: 郑强先生,本公司监事,1973年出生,中 国国籍,无境外永久居留权。1995年,毕业 于南京大学经济管理专业;1995年至1998年, 就职于江苏常州公路运输有限公司,任主办 会计;1998年至2000年,就职于越秀会计师事 务所,任项目经理;2000年至2010年,就职于 国富浩华会计师事务所有限公司广东分所, 历任项目经理、部门经理、合伙人及副主任 会计师;2010年至2011年,就职于广东中科 招商创业投资管理有限责任公司,任副总经 理;2011年至今,任广东中广投资管理有限公 司董事、总经理;2016年至今,任湛江中广创 业投资有限公司董事、总经理;2016年3月至 今,担任本公司监事. Mr. Zheng Qiang, a supervisor of the Company. He was born in 1973. His nationality is Chinese with no permanent residency abroad. He graduated from Nanjing University with a major in economic management in 1995. From 1995 to 1998, he worked for Jiangsu Changzhou Road Transportation Co., Ltd. as an organizer of accounting. From 1998 to 2000, he worked as a project manager in Yuexiu Certified Public Accountants. In 2010, he worked in the Guangdong branch of Guofu Haohua Certified Public Accountants Co., Ltd., and served as a project manager, department manager, partner and deputy chief accountant. From 2010 to 2011, he worked for Guangdong Zhongke Investment Venture Capital Management Co., Ltd. as a deputy general manager; since 2011, he has served as thedirector and general manager of Guangdong Zhongguang Investment Management Co., Ltd.; since 2016, he has served as director and general manager of Zhanjiang Zhongguang Venture Capital Co., Ltd.; since March 2016, he has served as the supervisor of the Company. We identify four types of entities: Person (the Total # Average # per passage Person 961 1.09 Company 1988 2.13 Position 2687 1.33 Time 1275 1.01 Table 2: Statistics for the RESUME dataset. name of the executive), Company (the company that the executive works/worked for), Position (the position that he/she holds/held) and Time (the time period that the executive occupies/occupied that position). It is worth noting that one person can work for different companies during different periods of time and that one person can hold different positions in different periods of time for the same company. We recruited crowdworkers to fill the slots in Table 1. Each passage is labeled by two different crowdworkers. If labels from the two annotators disagree, one or more annotators were asked to label the sentence and a majority vote was taken as the final decision. Since the wording of the text is usually very explicit and formal, the interagreement between annotators is very high, achieving a value of 93.5% for all slots. Some statistics of the dataset are shown in Table 2. We randomly split the dataset into training (80%), validation(10%) and test set (10%). 4 Model 4.1 System Overview The overview of the algorithm is shown in Algorithm 1. The algorithm contains two stages: (1) The head-entity extraction stage (line 4-9): each episode of multi-turn QA is triggered by an entity. To extract this starting entity, we transform each entity type to a question using EntityQuesTemplates (line 4) and the entity e is extracted by answering the question (line 5). If the system outputs the special NONE token, then it means s does not contain any entity of that type. 
(2) The relation and the tail-entity extraction stage (line 10-24): ChainOfRelTemplates defines a chain of relations, the order of which we need to follow to run multi-turn QA. The reason is that the extraction of some entities depends on the extraction of others. For example, in the RESUME dataset, the position held by an executive relies on the company he works for. Also the extraction of the Time entity relies on the extraction of both the Company and the Position. The extraction order is manually pre-defined. ChainOfRelTemplates also 1344 Relation Type head-e tail-e Natural Language Question & Template Question GEN-AFF FAC GPE find a geo-political entity that connects to XXX XXX; has affiliation; geo-political entity PART-WHOLE FAC FAC find a facility that geographically relates to XXX XXX; part whole; facility PART-WHOLE FAC GPE find a geo-political entity that geographically relates to XXX XXX; part whole; geo-political entity PART-WHOLE FAC VEH find a vehicle that belongs to XXX XXX; part whole; vehicle PHYS FAC FAC find a facility near XXX? XXX; physical; facility ART GPE FAC find a facility which is made by XXX XXX; agent artifact; facility ART GPE VEH find a vehicle which is owned or used by XXX XXX; agent artifact; vehicle ART GPE WEA find a weapon which is owned or used by XXX XXX; agent artifact; weapon ORG-AFF GPE ORG find an organization which is invested by XXX XXX; organization affiliation; organization PART-WHOLE GPE GPE find a geo political entity which is controlled by XXX XXX; part whole; geo-political entity PART-WHOLE GPE LOC find a location geographically related to XXX XXX; part whole; location Table 3: Some of the question templates for different relation types in AEC. Q1 Person: who is mentioned in the text? A: e1 Q2 Company: which companies did e1 work for? A: e2 Q3 Position: what was e1’s position in e2? A: e3 Q4 Time: During which period did e1 work for e2 as e3 A: e4 Table 4: Question templates for the RESUME dataset. defines the template for each relation. Each template contains some slots to be filled. To generate a question (line 14), we insert previously extracted entity/entities to the slot/slots in a template. The relation REL and tail-entity e will be jointly extracted by answering the generated question (line 15). A returned NONE token indicates that there is no answer in the given sentence. It is worth noting that entities extracted from the head-entity extraction stage may not all be head entities. In the subsequent relation and tail-entity extraction stage, extracted entities from the first stage are initially assumed to be head entities, and are fed to the templates to generate questions. If an entity e extracted from the first stage is indeed a head-entity of a relation, then the QA model will extract the tail-entity by answering the corresponding question. Otherwise, the answer will be NONE and thus ignored. For ACE04, ACE05 and CoNLL04 datasets, only two QA turns are needed. ChainOfRelTemplates thus only contain chains of 1. For RESUME, we need to extract 4 entities, so ChainOfRelTemplates contain chains of 3. 4.2 Generating Questions using Templates Each entity type is associated with a type-specific question generated by the templates. There are two ways to generate questions based on templates: natural language questions or pseudo-questions. A pseudo-question is not necessarily grammatical. 
For example, the natural language question for the Facility type could be Which facility is mentioned in the text, and the pseudo-question could just be entity: facility. At the relation and the tail-entity joint extraction stage, a question is generated by combing a relation-specific template with the extracted headentity. The question could be either a natural language question or a pseudo-question. Examples are shown in Table 3 and Table 4. 4.3 Extracting Answer Spans via MRC Various MRC models have been proposed, such as BiDAF (Seo et al., 2016) and QANet (Yu et al., 2018). In the standard MRC setting, given a question Q = {q1, q2, ..., qNq} where Nq denotes the number of words in Q, and context C = {c1, c2, ..., cNc}, where Nc denotes the num1345 Input: sentence s, EntityQuesTemplates, ChainOfRelTemplates Output: a list of list (table) M = [] 1: 2: M ←∅ 3: HeadEntList←∅ 4: for entity question in EntityQuesTemplates do 5: e1 = Extract Answer(entity question, s) 6: if e1 ̸= NONE do 7: HeadEntList = HeadEntList + {e1} 8: endif 9: end for 10: for head entity in HeadEntList do 11: ent list = [head entity] 12: for [rel, rel temp] in ChainOfRelTemplates do 13: for (rel, rel temp) in List of [rel, rel temp] do 14: q = GenQues(rel temp, rel, ent list) 15: e = Extract Answer(rel question, s) 16: if e ̸= NONE 17: ent list = ent list + e 18: endif 19: end for 20: end for 21: if len(ent list)=len([rel, rel temp]) 22: M = M + ent list 23: endif 24: end for 25: return M Algorithm 1: Transforming the entity-relation extraction task to a multi-turn QA task. ber of words in C, we need to predict the answer span. For the QA framework, we use BERT (Devlin et al., 2018) as a backbone. BERT performs bidirectional language model pretraining on largescale datasets using transformers (Vaswani et al., 2017) and achieves SOTA results on MRC datasets like SQUAD (Rajpurkar et al., 2016). To align with the BERT framework, the question Q and the context C are combined by concatenating the list [CLS, Q, SEP, C, SEP], where CLS and SEP are special tokens, Q is the tokenized question and C is the context. The representation of each context token is obtained using multi-layer transformers. Traditional MRC models (Wang and Jiang, 2016; Xiong et al., 2017) predict the starting and ending indices by applying two softmax layers to the context tokens. This softmax-based span extraction strategy only fits for single-answer extraction tasks, but not for our task, since one sentence/passage in our setting might contain multiple answers. To tackle this issue, we formalize the task as a query-based tagging problem (Lafferty et al., 2001; Huang et al., 2015; Ma and Hovy, 2016). Specially, we predict a BMEO (beginning, inside, ending and outside) label for each token in the context given the query. The representation of each word is fed to a softmax layer to output a BMEO label. One can think that we are transforming two Nclass classification tasks of predicting the starting and the ending indices (where N denotes the length of sentence) to N 5-class classification tasks4. Training and Test At the training time, we jointly train the objectives for the two stages: L = (1 −λ)L(head-entity) + λL(tail-entity, rel) (1) λ ∈[0, 1] is the parameter controling the trade-off between the two objectives. Its value is tuned on the validation set. Both the two models are initialized using the standard BERT model and they share parameters during the training. 
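A minimal PyTorch-style sketch of this query-conditioned tagging head and the interpolated objective of Eq. (1) is given below; the single shared classifier, the tensor shapes, and the four-way label inventory are simplifying assumptions rather than the authors' code, which trains two BERT-initialized models with shared parameters.

```python
import torch
import torch.nn as nn

class QueryTaggingHead(nn.Module):
    """Per-token classifier over the representations BERT produces for
    [CLS] question [SEP] context [SEP]; the labels form a BMEO-style tag
    set (the exact number of classes is not essential to the sketch)."""
    def __init__(self, hidden_size=768, num_labels=4):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, hidden_states):            # (batch, seq_len, hidden)
        return self.classifier(hidden_states)    # (batch, seq_len, num_labels)

head = QueryTaggingHead()
ce = nn.CrossEntropyLoss()

# Toy stand-ins for the shared encoder's outputs in the two stages.
h_head_stage = torch.randn(2, 16, 768)           # head-entity questions
h_tail_stage = torch.randn(2, 16, 768)           # relation / tail-entity questions
y_head = torch.randint(0, 4, (2, 16))            # gold tags for stage 1
y_tail = torch.randint(0, 4, (2, 16))            # gold tags for stage 2

loss_head = ce(head(h_head_stage).reshape(-1, 4), y_head.reshape(-1))
loss_tail = ce(head(h_tail_stage).reshape(-1, 4), y_tail.reshape(-1))

lam = 0.7                                        # the best value reported in Section 6.2
loss = (1 - lam) * loss_head + lam * loss_tail   # Eq. (1)
loss.backward()
```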
At test time, headentities and tail-entities are extracted separately based on the two objectives. 4.4 Reinforcement Learning Note that in our setting, the extracted answer from one turn not only affects its own accuracy, but also determines how a question will be constructed for the downstream turns, which in turn affect later accuracies. We decide to use reinforcement learning to tackle it, which has been proved to be successful in multi-turn dialogue generation (Mrkˇsi´c et al., 2015; Li et al., 2016a; Wen et al., 2016), a task that has the same challenge as ours. Action and Policy In a RL setting, we need to define action and policy. In the multi-turn QA setting, the action is selecting a text span in each turn. The policy defines the probability of selecting a certain span given the question and the context. As the algorithm relies on the BMEO tagging output, the probability of selecting a certain span {w1, w2, ..., wn} is the joint probability of w1 being assigned to B (beginning), w2, ..., wn−1 being assigned to M (inside) and wn being assigned to E (end), written as follows: p(y(w1, ..., wn) = answer|question, s) = p(w1 = B) × p(wn = E) Y i∈[2,n−1] p(wi = M) (2) Reward For a given sentence s, we use the number of correctly retrieved triples as rewards. We use the REINFORCE algorithm (Williams, 1992), a kind of policy gradient method, to find the optimal policy, which maximizes the expected reward 4 For some of the relations that we are interested in, their corresponding questions have single answers. We tried the strategy of predicting the starting and the ending index and found the results no different from the ones in the multi-answer QA-based tagging setting. 1346 Eπ[R(w)]. The expectation is approximated by sampling from the policy π and the gradient is computed using the likelihood ratio: ∇E(θ) ≈[R(w) −b]∇log π(y(w)|question s)) (3) where b denotes a baseline value. For each turn in the multi-turn QA setting, getting an answer correct leads to a reward of +1 . The final reward is the accumulative reward of all turns. The baseline value is set to the average of all previous rewards. We do not initialize policy networks from scratch, but use the pre-trained head-entity and tail-entity extraction model described in the previous section. We also use the experience replay strategy (Mnih et al., 2015): for each batch, half of the examples are simulated and the other half is randomly selected from previously generated examples. For the RESUME dataset, we use the strategy of curriculum learning (Bengio et al., 2009), i.e., we gradually increase the number of turns from 2 to 4 at training. 5 Experimental Results 5.1 Results on RESUME Answers are extracted according to the order of Person (first-turn), Company (second-turn), Position (third-turn) and Time (forth-turn), and the extraction of each answer depends on those prior to them. For baselines, we first implement a joint model in which entity extraction and relation extraction are trained together (denoted by tagging+relation). As in Zheng et al. (2017), entities are extracted using BERT tagging models, and relations are extracted by applying a CNN to representations output by BERT transformers. Existing baselines which involve entity and relation identification stages (either pipelined or joint) are well suited for triplet extractions, but not really tailored to our setting because in the third and forth turn, we need more information to decide the relation than just the two entities. 
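Returning to the reinforcement-learning formulation above, Eqs. (2) and (3) can be sketched as follows; the label ordering, the example reward and baseline values, and the single sampled span are assumptions made for illustration only.

```python
import torch

def span_log_prob(log_probs, start, end, B=0, M=1, E=2):
    """log-probability of a span under Eq. (2): token `start` tagged B,
    token `end` tagged E, tokens in between tagged M (the label indices
    are an assumed ordering; single-token spans are ignored here)."""
    return (log_probs[start, B] + log_probs[end, E]
            + log_probs[start + 1:end, M].sum())

# Toy per-token tag distribution from the tagging model (10 tokens, 4 labels).
logits = torch.randn(10, 4, requires_grad=True)
log_probs = torch.log_softmax(logits, dim=-1)

reward = 2.0        # e.g. two correctly retrieved triples in this episode
baseline = 1.4      # running average of rewards from previous episodes
advantage = reward - baseline

# REINFORCE (Eq. (3)): weight the log-probability of the sampled span
# by (R - b); minimizing the negative realizes gradient ascent.
loss = -advantage * span_log_prob(log_probs, start=2, end=5)
loss.backward()
print(logits.grad.shape)                # torch.Size([10, 4])
```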
For instance, to extract Position, we need both Person and Company, and to extract Time, we need Person, Company and Position. This is akin to a dependency parsing task, but at the tag-level rather than the word-level (Dozat and Manning, 2016; Chen and Manning, 2014). We thus proposed the following baseline, which modifies the previous entity+relation strategy to entity+dependency, denoted by tagging+dependency. We use the BERT tagging model to assign tagging labels to each word, and modify the current SOTA dependency parsing model Biaffine (Dozat and Manning, 2016) to construct dependencies between tags. The Biaffine dependency model and the entity-extraction model are jointly trained. Results are presented in Table 5. As can be seen, the tagging+dependency model outperforms the tagging+relation model. The proposed multiturn QA model performs the best, with RL adding additional performance boost. Specially, for Person extraction, which only requires single-turn QA, the multi-turn QA+RL model performs the same as the multi-turn QA model. It is also the case in tagging+relation and tagging+dependency. 5.2 Results on ACE04, ACE05 and CoNLL04 For ACE04, ACE05 and CoNLL04, only two turns of QA are required. For evaluation, we report micro-F1 scores, precision and recall on entities and relations (Tables 6, 7 and 8) as in Li and Ji (2014); Miwa and Bansal (2016); Katiyar and Cardie (2017); Zhang et al. (2017). For ACE04, the proposed multi-turn QA model already outperforms previous SOTA by +1.8% for entity extraction and +1.0% for relation extraction. For ACE05, the proposed multi-turn QA model outperforms previous SOTA by +1.2% for entity extraction and +0.6% for relation extraction. The proposed multiturn QA model leads to a +2.2% improvement on entity F1 and +1.1% on relation F1. 6 Ablation Studies 6.1 Effect of Question Generation Strategy In this subsection, we compare the effects of natural language questions and pseudo-questions. Results are shown in Table 9. We can see that natural language questions lead to a strict F1 improvement across all datasets. This is because natural language questions provide more fine-grained semantic information and can help entity/relation extraction. By contrast, the pseudoquestions provide very coarse-grained, ambiguous and implicit hints of entity and relation types, which might even confuse the model. 6.2 Effect of Joint Training In this paper, we decompose the entity-relation extraction task into two subtasks: a multi-answer task for head-entity extraction and a single-answer task for joint relation and tail-entity extraction. We jointly train two models with parameters shared. 1347 multi-turn QA multi-turn QA+RL tagging+dependency tagging+relation p r f p r f p r f p r f Person 98.1 99.0 98.6 98.1 99.0 98.6 97.0 97.2 97.1 97.0 97.2 97.1 Company 82.3 87.6 84.9 83.3 87.8 85.5 81.4 87.3 84.2 81.0 86.2 83.5 Position 97.1 98.5 97.8 97.3 98.9 98.1 96.3 98.0 97.0 94.4 97.8 96.0 Time 96.6 98.8 97.7 97.0 98.9 97.9 95.2 96.3 95.7 94.0 95.9 94.9 all 91.0 93.2 92.1 91.6 93.5 92.5 90.0 91.7 90.8 88.2 91.5 89.8 Table 5: Results for different models on the RESUME dataset. Models Entity P Entity R Entity F Relation P Relation R Relation F Li and Ji (2014) 83.5 76.2 79.7 60.8 36.1 49.3 Miwa and Bansal (2016) 80.8 82.9 81.8 48.7 48.1 48.4 Katiyar and Cardie (2017) 81.2 78.1 79.6 46.4 45.3 45.7 Bekoulis et al. (2018) 81.6 47.5 Multi-turn QA 84.4 82.9 83.6 50.1 48.7 49.4 (+1.0) Table 6: Results of different models on the ACE04 test set. 
Results for pipelined methods are omitted since they consistently underperform joint models (see Li and Ji (2014) for details). The parameter λ control the tradeoff between the two subtasks: L = (1−λ)L(head-entity)+λL(tail-entity) (4) Results regarding different values of λ on the ACE05 dataset are given as follows: λ Entity F1 Relation F1 λ = 0 85.0 55.1 λ = 0.1 84.8 55.4 λ = 0.2 85.2 56.2 λ = 0.3 84.8 56.4 λ = 0.4 84.6 57.9 λ = 0.5 84.8 58.3 λ = 0.6 84.6 58.9 λ = 0.7 84.8 60.2 λ = 0.8 83.9 58.7 λ = 0.9 82.7 58.3 λ = 1.0 81.9 57.8 When λ is set to 0, the system is essentially only trained on the head-entity prediction task. It is interesting to see that λ = 0 does not lead to the best entity-extraction performance. This demonstrates that the second-stage relation extraction actually helps the first-stage entity extraction, which again confirms the necessity of considering these two subtasks together. For the relation extraction task, the best performance is obtained when λ is set to 0.7. 6.3 Case Study Table 10 compares outputs from the proposed multiturn QA model with the ones of the previous SOTA MRT model (Sun et al., 2018). In the first example, MRT is not able to identify the relation between john scottsdale and iraq because the two entities are too far away, but our proposed QA model is able to handle this issue. In the second example, the sentence contains two pairs of the same relation. The MRT model has a hard time identifying handling this situation, not able to locate the ship entity and the associative relation, which the multi-turn QA model is able to handle this case. 7 Conclusion In this paper, we propose a multi-turn question answering paradigm for the task of entity-relation extraction. We achieve new state-of-the-art results on 3 benchmark datasets. We also construct a new entity-relation extraction dataset that requires hierarchical relation reasoning and the proposed model achieves the best performance. References Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Adversarial training for multi-context joint entity and relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2830–2836. Association for Computational Linguistics. Yoshua Bengio, J´erˆome Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48. ACM. Yee Seng Chan and Dan Roth. 2011. Exploiting syntactico-semantic structures for relation extraction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 551– 560. Association for Computational Linguistics. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 conference on 1348 Models Entity P Entity R Entity F Relation P Relation R Relation F Li and Ji (2014) 85.2 76.9 80.8 65.4 39.8 49.5 Miwa and Bansal (2016) 82.9 83.9 83.4 57.2 54.0 55.6 Katiyar and Cardie (2017) 84.0 81.3 82.6 55.5 51.8 53.6 Zhang et al. (2017) 83.5 57.5 Sun et al. (2018) 83.9 83.2 83.6 64.9 55.1 59.6 Multi-turn QA 84.7 84.9 84.8 64.8 56.2 60.2 (+0.6) Table 7: Results of different models on the ACE05 test set. Results for pipelined methods are omitted since they consistently underperform joint models (see Li and Ji (2014) for details). 
Models Entity P Entity R Entity F1 Relation P Relation R Relation F Miwa and Sasaki (2014) – – 80.7 – – 61.0 Zhang et al. (2017) – – 85.6 – – 67.8 Bekoulis et al. (2018) – – 83.6 – – 62.0 Multi-turn QA 89.0 86.6 87.8 69.2 68.2 68.9 (+2.1) Table 8: Comparison of the proposed method with the previous models on the CoNLL04 dataset. Precision and recall values of baseline models were not reported in the previous papers. RESUME Model Overall P Overall R Overall F Pseudo Q 90.2 92.3 91.2 Natural Q 91.0 93.2 92.1 ACE04 Model EP ER EF RP RR RF Pseudo Q 83.7 81.3 82.5 49.4 47.2 48.3 Natural Q 84.4 82.9 83.6 50.1 48.7 49.9 ACE05 Model EP ER EF RP RR RF Pseudo Q 83.6 84.7 84.2 60.4 55.9 58.1 Natural Q 84.7 84.9 84.8 64.8 56.2 60.2 CoNLL04 Model EP ER EF RP RR RF Pseudo Q 87.4 86.4 86.9 68.2 67.4 67.8 Natural Q 89.0 86.6 87.8 69.6 68.2 68.9 Table 9: Comparing of the effect of natural language questions with pseudo-questions. EXAMPLE1 [john scottsdale] PER: PHYS-1 is on the front lines in [iraq]GPE: PHYS-1 . MRT [john scottsdale] PER is on the front lines in [iraq]GPE . MULTI-QA [john scottsdale] PER: PHYS-1 is on the front lines in [iraq]GPE: PHYS-1 . EXAMPLE2 The [men] PER: ART-1 held on the sinking [vessel] VEH: ART-1 until the [passenger] PER: ART-2 [ship] VEH: ART-2 was able to reach them. MRT The [men] PER: ART-1 held on the sinking [vessel] VEH: ART-1 until the [passenger]PER ship was able to reach them. MULTI-QA The [men] PER: ART-1 held on the sinking [vessel] VEH: ART-1 until the [passenger] PER: ART-2 [ship] VEH: ART-2 was able to reach them. Table 10: Comparing the multi-turn QA model with MRT (Sun et al., 2018). empirical methods in natural language processing (EMNLP), pages 740–750. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Timothy Dozat and Christopher D Manning. 2016. Deep biaffine attention for neural dependency parsing. arXiv preprint arXiv:1611.01734. Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179. Pankaj Gupta, Hinrich Sch¨utze, and Bernt Andrassy. 2016. Table filling multi-task recurrent neural network for joint entity and relation extraction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2537–2547. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551. Rohit J Kate and Raymond J Mooney. 2010. Joint entity and relation extraction using card-pyramid parsing. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 203–212. Association for Computational Linguistics. Arzoo Katiyar and Claire Cardie. 2017. Going out on a limb: Joint extraction of entity mentions and relations without dependency trees. In Proceedings 1349 of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 917–928. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. 
John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Oliver Lemon, Kallirroi Georgila, James Henderson, and Matthew Stuttle. 2006. An isu dialogue system exhibiting reinforcement learning of dialogue policies: generic slot-filling in the talk in-car system. In Proceedings of the Eleventh Conference of the European Chapter of the Association for Computational Linguistics: Posters & Demonstrations, pages 119– 122. Association for Computational Linguistics. Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. arXiv preprint arXiv:1706.04115. Jiwei Li, Alexander H Miller, Sumit Chopra, Marc’Aurelio Ranzato, and Jason Weston. 2016a. Dialogue learning with human-in-the-loop. arXiv preprint arXiv:1611.09823. Jiwei Li, Alexander H Miller, Sumit Chopra, Marc’Aurelio Ranzato, and Jason Weston. 2016b. Learning through dialogue interactions by asking questions. arXiv preprint arXiv:1612.04936. Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 402–412. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2124–2133. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. arXiv preprint arXiv:1603.01354. Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. arXiv preprint arXiv:1601.00770. Makoto Miwa, Rune Sætre, Yusuke Miyao, and Jun’ichi Tsujii. 2009. A rich feature vector for protein-protein interaction extraction from multiple corpora. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 121–130. Association for Computational Linguistics. Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1858–1869. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature, 518(7540):529. Nikola Mrkˇsi´c, Diarmuid O S´eaghdha, Blaise Thomson, Milica Gaˇsi´c, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2015. Multidomain dialog state tracking using recurrent neural networks. arXiv preprint arXiv:1506.07190. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. 
Technical report, ILLINOIS UNIV AT URBANA-CHAMPAIGN DEPT OF COMPUTER SCIENCE. Dan Roth and Wen-tau Yih. 2007. Global inference for entity and relation identification via a linear programming formulation. Introduction to statistical relational learning, pages 553–580. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2017. Reasonet: Learning to stop reading in machine comprehension. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1047–1055. ACM. Sameer Singh, Sebastian Riedel, Brian Martin, Jiaping Zheng, and Andrew McCallum. 2013. Joint inference of entities, relations, and coreference. In Proceedings of the 2013 workshop on Automated knowledge base construction, pages 1–6. ACM. 1350 Changzhi Sun, Yuanbin Wu, Man Lan, Shiliang Sun, Wenting Wang, Kuang-Chih Lee, and Kewen Wu. 2018. Extracting entities and relations with joint minimum risk training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2256–2265. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075. Ryuichi Takanobu, Tianyang Zhang, Jiexi Liu, and Minlie Huang. 2018. A hierarchical framework for relation extraction with reinforcement learning. arXiv preprint arXiv:1811.03925. Chuanqi Tan, Furu Wei, Nan Yang, Bowen Du, Weifeng Lv, and Ming Zhou. 2017. S-net: From answer extraction to answer generation for machine reading comprehension. arXiv preprint arXiv:1706.04815. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016a. Relation classification via multi-level attention cnns. Shuohang Wang and Jing Jiang. 2016. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905. Shuohang Wang, Mo Yu, Jing Jiang, Wei Zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, and Murray Campbell. 2017a. Evidence aggregation for answer re-ranking in open-domain question answering. arXiv preprint arXiv:1711.05116. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017b. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 189–198. Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016b. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211. Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2016. A networkbased end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562. Jason D Williams and Steve Young. 2005. Scaling up pomdps for dialog management: The“summary pomdp”method. In IEEE Workshop on Automatic Speech Recognition and Understanding, 2005., pages 177–182. IEEE. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. 
Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dcn+: Mixed objective and deep residual coattention for question answering. arXiv preprint arXiv:1711.00106. Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1640–1649. Hai Ye, Wenhan Chao, Zhunchen Luo, and Zhoujun Li. 2016. Jointly extracting relations with class ties via effective deep ranking. arXiv preprint arXiv:1612.07602. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541. Xiaofeng Yu and Wai Lam. 2010. Jointly identifying entities and extracting relations in encyclopedia text via a graphical model approach. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1399–1407. Association for Computational Linguistics. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. Journal of machine learning research, 3(Feb):1083–1106. Xiangrong Zeng, Daojian Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2018. Extracting relational facts by an end-to-end neural model with copy mechanism. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 506–514. Meishan Zhang, Yue Zhang, and Guohong Fu. 2017. End-to-end neural relation extraction with global optimization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1730–1740. Suncong Zheng, Yuexing Hao, Dongyuan Lu, Hongyun Bao, Jiaming Xu, Hongwei Hao, and Bo Xu. 2017. Joint entity and relation extraction based on a hybrid neural network. Neurocomputing, 257:59–66.
2019
129
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 129–139 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 129 Automatic Generation of High Quality CCGbanks for Parser Domain Adaptation Masashi Yoshikawa1 yoshikawa.masashi.yh8@ is.naist.jp Hiroshi Noji2 [email protected] Koji Mineshima3 [email protected] Daisuke Bekki3 [email protected] 1Nara Institute of Science and Technology, Nara, Japan 2Artificial Intelligence Research Center, AIST, Tokyo, Japan 3Ochanomizu University, Tokyo, Japan Abstract We propose a new domain adaptation method for Combinatory Categorial Grammar (CCG) parsing, based on the idea of automatic generation of CCG corpora exploiting cheaper resources of dependency trees. Our solution is conceptually simple, and not relying on a specific parser architecture, making it applicable to the current best-performing parsers. We conduct extensive parsing experiments with detailed discussion; on top of existing benchmark datasets on (1) biomedical texts and (2) question sentences, we create experimental datasets of (3) speech conversation and (4) math problems. When applied to the proposed method, an off-the-shelf CCG parser shows significant performance gains, improving from 90.7% to 96.6% on speech conversation, and from 88.5% to 96.8% on math problems. 1 Introduction The recent advancement of Combinatory Categorial Grammar (CCG; Steedman (2000)) parsing (Lee et al., 2016; Yoshikawa et al., 2017), combined with formal semantics, has enabled high-performing natural language inference systems (Abzianidze, 2017; Mart´ınez-G´omez et al., 2017). We are interested in transferring the success to a range of applications, e.g., inference systems on scientific papers and speech conversation. To achieve the goal, it is urgent to enhance the CCG parsing accuracy on new domains, i.e., solving a notorious problem of domain adaptation of a statistical parser, which has long been addressed in the literature. Especially in CCG parsing, prior work (Rimell and Clark, 2008; Lewis et al., 2016) has taken advantage of highly informative categories, which determine the most part of sentence structure once correctly assigned to words. It is demonstrated that the annotation of only preterminal categories is sufficient to adapt a CCG parser to new domains. However, the solution is limited to a specific parser’s architecture, making non-trivial the application of the method to the current state-of-the-art parsers (Lee et al., 2016; Yoshikawa et al., 2017; Stanojevi´c and Steedman, 2019), which require full parse annotation. Additionally, some ambiguities remain unresolved with mere supertags, especially in languages other than English (as discussed in Yoshikawa et al. (2017)), to which the method is not portable. Distributional embeddings are proven to be powerful tools for solving the issue of domain adaption, with their unlimited applications in NLP, not to mention syntactic parsing (Lewis and Steedman, 2014b; Mitchell and Steedman, 2015; Peters et al., 2018). Among others, Joshi et al. (2018) reports huge performance boosts in constituency parsing using contextualized word embeddings (Peters et al., 2018), which is orthogonal to our work, and the combination shows huge gains. Including Joshi et al. (2018), there are studies to learn from partially annotated trees (Mirroshandel and Nasr, 2011; Li et al., 2016; Joshi et al., 2018), again, most of which exploit specific parser architecture. 
In this work, we propose a conceptually simpler approach to the issue, which is agnostic on any parser architecture, namely, automatic generation of CCGbanks (i.e., CCG treebanks)1 for new domains, by exploiting cheaper resources of dependency trees. Specifically, we train a deep conversion model to map a dependency tree to a CCG tree, on aligned annotations of the Penn Treebank (Marcus et al., 1993) and the English CCGbank (Hockenmaier and Steedman, 2007) (Figure 1a). When we need a CCG parser tailored for 1In this paper, we call a treebank based on CCG grammar a CCGbank, and refer to the specific one constructed in Hockenmaier and Steedman (2007) as the English CCGbank. 130 Trained Converter the government reported that ... det nsubj ... mark the gove NP/N reported government that ... Bidirectional TreeLSTM (Miwa et al.,2016) i Vector encodings Dependency tree CCG tree (a) Training the converter A* parsing decoder Circadian rhythm in glucocorticoid ... amod nmod case ... Genia Dep. Corpus Genia CCG Corpus (b) Using the trained converter root root (c) Fine-tune a CCG parser N/N N/N N N N the NP/N NP government reported that N NP/S S ... S NP S\NP > > > < (S\NP)/NP Figure 1: Overview of the proposed method. (a) A neural network-based model is trained to convert a dependency tree to a CCG one using aligned annotations on WSJ part of the Penn Treebank and the English CCGbank. (b) The trained converter is applied to an existing dependency corpus (e.g., the Genia corpus) to generate a CCGbank, (c) which is then used to fine-tune the parameters of an off-the-shelf CCG parser. a new domain, the trained converter is applied to a dependency corpus in that domain to obtain a new CCGbank (1b), which is then used to fine-tune an off-the-shelf CCG parser (1c). The assumption that we have a dependency corpus in that target domain is not demanding given the abundance of existing dependency resources along with its developed annotation procedure, e.g., Universal Dependencies (UD) project (Nivre et al., 2016), and the cheaper cost to train an annotator. One of the biggest bottlenecks of syntactic parsing is handling of countless unknown words. It is also true that there exist such unfamiliar input data types to our converter, e.g., disfluencies in speech and symbols in math problems. We address these issues by constrained decoding (§4), enabled by incorporating a parsing technique into our converter. Nevertheless, syntactic structures exhibit less variance across textual domains than words do; our proposed converter suffers less from such unseen events, and expectedly produces high-quality CCGbanks. The work closest to ours is Jiang et al. (2018), where a conversion model is trained to map dependency treebanks of different annotation principles, which is used to increase the amount of labeled data in the target-side treebank. Our work extends theirs and solves a more challenging task; the mapping to learn is to more complex CCG trees, and it is applied to datasets coming from plainly different natures (i.e., domains). Some prior studies design conversion algorithms to induce CCGbanks for languages other than English from dependency treebanks (Bos et al., 2009; Ambati et al., 2013). Though the methods may be applied to our problem, they usually cannot cover the entire dataset, consequently discarding sentences with characteristic features. On top of that, unavoidable information gaps between the two syntactic formalisms may at most be addressed probabilistically. 
To verify the generalizability of our approach, on top of the existing benchmarks on (1) biomedical texts and (2) question sentences (Rimell and Clark, 2008), we conduct parsing experiments on (3) speech conversation texts, which exhibit other challenges such as handling informal expressions and lengthy sentences. We create a CCG version of the Switchboard corpus (Godfrey et al., 1992), consisting of full train/dev/test sets of automatically generated trees and manually annotated 100 sentences for a detailed evaluation. Additionally, we manually construct experimental data for parsing (4) math problems (Seo et al., 2015), for which the importance of domain adaptation is previously demonstrated (Joshi et al., 2018). We observe huge additive gains in the performance of the depccg parser (Yoshikawa et al., 2017), by combining contextualized word embeddings (Peters et al., 2018) and our domain adaptation method: in terms of unlabeled F1 scores, 90.68% to 95.63% on speech conversation, and 88.49% to 95.83% on math problems, respectively.2 2All the programs and resources used in this work are available at: https://github.com/masashi-y/ depccg. 131 cats Ncats NPcats un that (NPx\NPx)/(S/NPx) Kyle NPkyle Sy/(Sy\NPkyle) T wants (Swants\NPz,1)/(Sw,2\NPz) to (Su\NPv)/(Su\NPv) see (Ssee\NPs,1)/NPt,2 (Ssee\NPv)/NPt : u = see, v = s >B (Swants\NPz)/NPt : w = see, z = v >B Sy/NPt : y = wants, z = kyle >B NPx\NPx : x = t > NPcats : x = cats < Figure 2: Example CCG derivation tree for phrase cats that Kyle wants to see. Categories are combined using rules such as an application rule (marked with “>”, X/Y Y ⇒X) and a composition rule (“>B”: X/Y Y/Z ⇒X/Z). See Steedman (2000) for the detail. 2 Combinatory Categorial Grammar CCG is a lexicalized grammatical formalism, where words and phrases are assigned categories with complex internal structures. A category X/Y (or X\Y) represents a phrase that combines with a Y phrase on its right (or left), and becomes an X phrase. As such, a category (S\NP)/NP represents an English transitive verb which takes NPs on both sides and becomes a sentence (S). The semantic structure of a sentence can be extracted using the functional nature of CCG categories. Figure 2 shows an example CCG derivation of a phrase cats that Kyle wants to see, where categories are marked with variables and constants (e.g., kyle in NPkyle), and argument ids in the case of verbs (subscripts in (Ssee\NPs,1)/NPt,2). Unification is performed on these variables and constants in the course of derivation, resulting in chains of equations s = v = z = kyle, and t = x = cats, successfully recovering the first and second argument of see: Kyle and cats (i.e., capturing long-range dependencies). What is demonstrated here is performed in the standard evaluation of CCG parsing, where the number of such correctly predicted predicate-argument relations is calculated (for the detail, see Clark et al. (2002)). Remarkably, it is also the basis of CCG-based semantic parsing (Abzianidze, 2017; Mart´ınez-G´omez et al., 2017; Matsuzaki et al., 2017), where the above simple unification rule is replaced with more sophisticated techniques such as λ-calculus. There are two major resources in CCG: the English CCGbank (Hockenmaier and Steedman, 2007) for news texts, and the Groningen Meaning Bank (Bos et al., 2017) for wider domains, including Aesop’s fables. 
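As a concrete illustration of the application (">") and composition (">B") rules used in the derivation above, the following tiny sketch encodes categories as nested tuples and implements just those two rules; features, unification variables, and the remaining combinators are omitted, and this is not the representation used by any particular parser.

```python
# Categories as nested tuples: an atom is a string, a functor is
# (result, slash, argument); e.g. (S\NP)/NP -> (("S", "\\", "NP"), "/", "NP").

def forward_application(left, right):
    """X/Y  Y  =>  X  (the '>' rule in the derivation above)."""
    if isinstance(left, tuple) and left[1] == "/" and left[2] == right:
        return left[0]
    return None

def forward_composition(left, right):
    """X/Y  Y/Z  =>  X/Z  (the '>B' rule)."""
    if (isinstance(left, tuple) and left[1] == "/"
            and isinstance(right, tuple) and right[1] == "/"
            and left[2] == right[0]):
        return (left[0], "/", right[2])
    return None

S_NP = ("S", "\\", "NP")
the = ("NP", "/", "N")                 # 'the'  : NP/N
to = (S_NP, "/", S_NP)                 # 'to'   : (S\NP)/(S\NP), features dropped
see = (S_NP, "/", "NP")                # 'see'  : (S\NP)/NP

print(forward_application(the, "N"))   # NP                     (the + government)
print(forward_composition(to, see))    # (('S', '\\', 'NP'), '/', 'NP'), i.e. (S\NP)/NP
```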
However, when one wants a CCG parser tuned for a specific domain, one faces the issue of its high annotation cost:
• The annotation requires linguistic expertise, namely the ability to keep track of the semantic composition performed during a derivation.
• An annotated tree must strictly conform to the grammar; e.g., inconsistencies such as combining N and S\NP result in ill-formed trees and hence must be disallowed.
We relax these requirements by using dependency trees, a simpler representation of syntactic structure, i.e., one that lacks information about long-range dependencies and the conjunct spans of coordination structures. Due to their simplicity and flexibility, however, it is easier to train an annotator, and there exist plenty of accessible dependency-based resources, which we exploit in this work.
3 Dependency-to-CCG Converter
We propose a domain adaptation method based on the automatic generation of a CCGbank out of a dependency treebank in the target domain. This is achieved by our dependency-to-CCG converter, a neural network model consisting of a dependency tree encoder and a CCG tree decoder. In the encoder, higher-order interactions among dependency edges are modeled with a bidirectional TreeLSTM (Miwa and Bansal, 2016), which is important to facilitate the mapping from a dependency tree to a more complex CCG tree. Due to the strict nature of the CCG grammar, we model the output space of CCG trees explicitly3; our decoder is inspired by the recent success of A* CCG parsing (Lewis and Steedman, 2014a; Yoshikawa et al., 2017), where the most probable valid tree is found using A* parsing (Klein and D. Manning, 2003). In the following, we describe the details of the proposed converter. 3The strictness and the large number of categories make it still hard to leave everything to neural networks to learn. We trained the constituency-based RSP parser (Joshi et al., 2018) on the English CCGbank by disguising the trees as constituency ones; its performance could not be evaluated since most of the output trees violated the grammar.
Firstly, we define a probabilistic model of the dependency-to-CCG conversion process. According to Yoshikawa et al. (2017), the structure of a CCG tree y for a sentence x = (x_1, ..., x_N) is almost uniquely determined4 if a sequence of pre-terminal CCG categories (supertags) c = (c_1, ..., c_N) and a dependency structure d = (d_1, ..., d_N) are provided, where d_i ∈ {0, ..., N} is the index of the dependency parent of x_i (0 represents the root node). Note that the dependency structure d is generally different from the input dependency tree.5 While supertags are highly informative about the syntactic structure (Bangalore and Joshi, 1999), remaining ambiguities such as attachment ambiguities need to be modeled using dependencies. Let the input dependency tree of sentence x be z = (p, d′, ℓ), where p_i is the part-of-speech tag of x_i, d′_i the index of its dependency parent, and ℓ_i the label of the corresponding dependency edge; then the conversion process is expressed as follows:6
P(y|x, z) = \prod_{i=1}^{N} p_{tag}(c_i|x, z) \prod_{i=1}^{N} p_{dep}(d_i|x, z).
Based on this formulation, we model c_i and d_i conditioned on the dependency tree z, and search for the y that maximizes P(y|x, z) using A* parsing.
Encoder A bidirectional TreeLSTM consists of two distinct TreeLSTMs (Tai et al., 2015). A bottom-up TreeLSTM recursively computes a hidden vector h↑_i for each x_i from the vector representation e_i of the word and the hidden vectors of its dependency children {h↑_j | d′_j = i}.
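A minimal sketch of this bottom-up step, in the style of the Child-Sum TreeLSTM of Tai et al. (2015), is given below; the class name, dimensions, and PyTorch framing are our own assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    """One bottom-up step: combine a word vector e_i with the states of
    its dependency children {h_j | d'_j = i} (Tai et al., 2015)."""
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.W_iou = nn.Linear(input_dim, 3 * hidden_dim)
        self.U_iou = nn.Linear(hidden_dim, 3 * hidden_dim, bias=False)
        self.W_f = nn.Linear(input_dim, hidden_dim)
        self.U_f = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, e_i, child_h, child_c):
        # child_h, child_c: (num_children, hidden_dim); empty tensors for leaves
        h_tilde = child_h.sum(dim=0)                       # summed child states
        iou = self.W_iou(e_i) + self.U_iou(h_tilde)
        i, o, u = iou.chunk(3, dim=-1)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        # one forget gate per child
        f = torch.sigmoid(self.W_f(e_i).unsqueeze(0) + self.U_f(child_h))
        c = i * u + (f * child_c).sum(dim=0)
        h = o * torch.tanh(c)
        return h, c

# toy usage: a head word whose two children have already been processed
cell = ChildSumTreeLSTMCell(input_dim=300, hidden_dim=300)
e_head = torch.randn(300)
child_h, child_c = torch.randn(2, 300), torch.randn(2, 300)
h_up, c_up = cell(e_head, child_h, child_c)   # h_up plays the role of h↑_i
```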
A top-down TreeLSTM, in turn, computes h↓ i using ei and a hidden vector of the dependency parent h↓ d′ i. In total, a bidirectional TreeLSTM returns concatenations of hidden vectors for all words: hi = h↑ i ⊕h↓ i . We encode a dependency tree as follows, where ev denotes the vector representation of variable v, and Ωand Ξd′ are shorthand notations of the series of operations of sequential and tree bidirectional LSTMs, respectively: e1, ..., eN = Ω(ep1 ⊕ex1, ..., epN ⊕exN ), h1, ..., hN = Ξd′(e1 ⊕eℓ1, ..., eN ⊕eℓN ). 4The uniqueness is broken if a tree contains a unary node. 5In this work, input dependency tree is based on Universal Dependencies (Nivre et al., 2016), while dependency structure d of a CCG tree is Head First dependency tree introduced in Yoshikawa et al. (2017). See § 5 for the detail. 6Here, the independence of each cis and dis is assumed. Decoder The decoder part adopts the same architecture as in Yoshikawa et al. (2017), where pdep|tag probabilities are computed on top of {hi}i∈[0,N], using a biaffine layer (Dozat and Manning, 2017) and a bilinear layer, respectively, which are then used in A* parsing to find the most probable CCG tree. Firstly a biaffine layer is used to compute unigram head probabilities pdep as follows: ri = ψdep child(hi), rj = ψdep head(hj), si,j = rT i Wrj + wTrj, pdep(di = j|x, z) ∝exp(si,j), where ψ denotes a multi-layer perceptron. The probabilities ptag are computed by a bilinear transformation of vector encodings xi and x ˆdi, where ˆdi is the most probable dependency head of xi with respect to pdep: ˆdi = arg maxj pdep(di = j|x, z). qi = ψtag child(hi), q ˆdi = ψtag head(h ˆdi), si,c = qT i Wcq ˆdi + vT c qi + uT c q ˆdi + bc, ptag(ci = c|x, z) ∝exp(si,c). A* Parsing Since the probability P(y|x, z) of a CCG tree y is simply decomposable into probabilities of subtrees, the problem of finding the most probable tree can be solved with a chart-based algorithm. In this work, we use one of such algorithms, A* parsing (Klein and D. Manning, 2003). A* parsing is a generalization of A* search for shortest path problem on a graph, and it controls subtrees (corresponding to a node in a graph case) to visit next using a priority queue. We follow Yoshikawa et al. (2017) exactly in formulating our A* parsing, and adopt an admissible heuristic by taking the sum of the max ptag|dep probabilities outside a subtree. The advantage of employing an A* parsing-based decoder is not limited to the optimality guarantee of the decoded tree; it enables constrained decoding, which is described next. 4 Constrained Decoding While our method is a fully automated treebank generation method, there are often cases where we want to control the form of output trees by using external language resources. For example, when generating a CCGbank for biomedical domain, it will be convenient if a disease dictionary is utilized to ensure that a complex disease name in a text is always assigned the category NP. In our 133 decoder based on A* parsing, it is possible to perform such a controlled generation of a CCG tree by imposing constraints on the space of trees. A constraint is a triplet (c, i, j) representing a constituent of category c spanning over words xi, ..., xj. The constrained decoding is achieved by refusing to add a subtree (denoted as (c′, k, l), likewise, with its category and span) to the priority queue when it meets one of the conditions: • The spans overlap: i < k ≤j < l or k < i ≤ l < j. 
• The spans are identical (i = k and j = l), while the categories are different (c ̸= c′) and no category c′′ exists such that c′ ⇒c′′ is a valid unary rule. The last condition on unary rule is necessary to prevent structures such as (NP (N dog)) from being accidentally discarded, when using a constraint to make a noun phrase to be NP. A set of multiple constraints are imposed by checking the above conditions for each of the constraints when adding a new item to the priority queue. When one wants to constrain a terminal category to be c, that is achieved by manipulating ptag: ptag(c|x, z) = 1 and for all categories c′ ̸= c, ptag(c′|x, z) = 0. 5 Experiments 5.1 Experimental Settings We evaluate our method in terms of performance gain obtained by fine-tuning an off-the-shelf CCG parser depccg (Yoshikawa et al., 2017), on a variety of CCGbanks obtained by converting existing dependency resources using the method. In short, the method of depccg is equivalent to omitting the dependence on a dependency tree z from P(y|x, z) of our converter model, and running an A* parsing-based decoder on ptag|dep calculated on h1, ..., hN = Ω(ex1, ..., exN ), as in our method. In the plain depccg, the word representation exi is a concatenation of GloVe7 vectors and vector representations of affixes. As in the previous work, the parser is trained on both the English CCGbank (Hockenmaier and Steedman, 2007) and the tri-training dataset by Yoshikawa et al. (2017). In this work, on top of that, we include as a baseline a setting where the affix vectors 7https://nlp.stanford.edu/projects/ glove/ Method UF1 LF1 depccg 94.0 88.8 + ELMo 94.98 90.51 Converter 96.48 92.68 Table 1: The performance of baseline CCG parsers and the proposed converter on WSJ23, where UF1 and LF1 represents unlabeled and labeled F1, respectively. are replaced by contextualized word representation (ELMo; Peters et al. (2018)) (exi = xGloV e xi ⊕ xELMo xi ),8 which we find marks the current best scores in the English CCGbank parsing (Table 1). The evaluation is based on the standard evaluation metric, where the number of correctly predicted predicate argument relations is calculated (§2), where labeled metrics take into account the category through which the dependency is constructed, while unlabeled ones do not. Implementation Details The input word representations to the converter are the concatenation of GloVe and ELMo representations. Each of epi and eℓi is randomly initialized 50-dimensional vectors, and the two-layer sequential LSTMs Ω outputs 300 dimensional vectors, as well as bidirectional TreeLSTM Ξd′, whose outputs are then fed into 1-layer 100-dimensional MLPs with ELU non-linearity (Clevert et al., 2016). The training is done by minimizing the sum of negative log likelihood of ptag|dep using the Adam optimizer (with β1 = β2 = 0.9), on a dataset detailed below. Data Processing In this work, the input tree to the converter follows Universal Dependencies (UD) v1 (Nivre et al., 2016). Constituency-based treebanks are converted using the Stanford Converter9 to obtain UD trees. The output dependency structure d follows Head First dependency tree (Yoshikawa et al., 2017), where a dependency arc is always from left to right. The conversion model is trained to map UD trees in the Wall Street Journal (WSJ) portion 2-21 of the Penn Treebank (Marcus et al., 1993) to its corresponding CCG trees in the English CCGbank (Hockenmaier and Steedman, 2007). 
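Returning to the constrained decoding of §4, the two pruning conditions translate into a small predicate over candidate chart items. The sketch below is our illustration with hypothetical names, not the depccg code; it follows the conditions as stated, while a stricter variant could additionally require the unary-rule result to match the constrained category.

```python
from typing import Iterable, Set, Tuple

Constraint = Tuple[str, int, int]   # (category c, i, j): words x_i..x_j must form c

def spans_overlap(i: int, j: int, k: int, l: int) -> bool:
    """True if the spans cross: i < k <= j < l or k < i <= l < j."""
    return (i < k <= j < l) or (k < i <= l < j)

def violates(item: Tuple[str, int, int],
             constraints: Iterable[Constraint],
             unary_rules: Set[Tuple[str, str]]) -> bool:
    """Decide whether a candidate chart item (c', k, l) must be kept out of
    the A* priority queue under the given constraints."""
    c_prime, k, l = item
    for c, i, j in constraints:
        if spans_overlap(i, j, k, l):
            return True
        if (i, j) == (k, l) and c != c_prime:
            # keep the item if some unary rule c' => c'' could still apply,
            # e.g. N => NP keeps (NP (N dog)) alive under an NP constraint
            if not any(lhs == c_prime for (lhs, _) in unary_rules):
                return True
    return False

unary = {("N", "NP")}
constraints = [("NP", 3, 5)]                # force words 3..5 to form an NP
print(violates(("S/NP", 2, 4), constraints, unary))   # True: crossing span
print(violates(("N", 3, 5), constraints, unary))      # False: N can be raised to NP
print(violates(("S", 3, 5), constraints, unary))      # True: wrong category, no unary escape
```

Terminal-category constraints are handled separately by manipulating p_tag, as described above.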
8We used the “original” ELMo model, with 1,024dimensional word vector outputs (https://allennlp. org/elmo). 9https://nlp.stanford.edu/software/ stanford-dependencies.shtml. We used the version 3.9.1. 134 Relation Parser Converter # (a) PPs attaching to NP / VP (NP\NP)/NP 90.62 97.46 2,561 (S\NP)\(S\NP))/NP 81.15 88.63 1,074 (b) Subject / object relative clauses (NP\NP)/(Sdcl\NP) 93.44 98.71 307 (NP\NP)/(Sdcl/NP) 90.48 93.02 20 Table 2: Per-relation F1 scores of the proposed converter and depccg + ELMo (Parser). “#” column shows the number of occurrence of the phenomenon. Fine-tuning the CCG Parser In each of the following domain adaptation experiments, newly obtained CCGbanks are used to fine-tune the parameters of the baseline parser described above, by re-training it on the mixture of labeled examples from the new target-domain CCGbank, the English CCGbank, and the tri-training dataset. 5.2 Evaluating Converter’s Performance First, we examine whether the trained converter can produce high-quality CCG trees, by applying it to dependency trees in the test portion (WSJ23) of Penn Treebank and then calculating the standard evaluation metrics between the resulting trees and the corresponding gold trees (Table 1). This can be regarded as evaluating the upper bound of the conversion quality, since the evaluated data comes from the same domain as the converter’s training data. Our converter shows much higher scores compared to the current best-performing depccg combined with ELMo (1.5% and 2.17% up in unlabeled/labeled F1 scores), suggesting that, using the proposed converter, we can obtain CCGbanks of high quality. Inspecting the details, the improvement is observed across the board (Table 2); the converter precisely handles PP-attachment (2a), notoriously hard parsing problem, by utilizing input’s pobj dependency edges, as well as relative clauses (2b), one of well-known sources of long-range dependencies, for which the converter has to learn from the non-local combinations of edges, their labels and part-of-speech tags surrounding the phenomenon. 5.3 Biomedical Domain and Questions Previous work (Rimell and Clark, 2008) provides CCG parsing benchmark datasets in biomedical texts and question sentences, each representing two contrasting challenges for a newswire-trained parser, i.e., a large amount of out-of-vocabulary Method P R F1 C&C 77.8 71.4 74.5 EasySRL 81.8 82.6 82.2 depccg 83.11 82.63 82.87 + ELMo 85.87 85.34 85.61 + GENIA1000 85.45 84.49 84.97 + Proposed 86.90 86.14 86.52 Table 3: Results on the biomedical domain dataset (§5.3). P and R represent precision and recall, respectively. The scores of C&C and EasySRL fine-tuned on the GENIA1000 is included for comparison (excerpted from Lewis et al. (2016)). Method P R F1 C&C 86.8 EasySRL 88.2 87.9 88.0 depccg 90.42 90.15 90.29 + ELMo 90.55 89.86 90.21 + Proposed 90.27 89.97 90.12 Table 4: Results on question sentences (§5.3). All of baseline C&C, EasySRL and depccg parsers are retrained on Questions data. words (biomedical texts), and rare or even unseen grammatical constructions (questions). Since the work also provides small training datasets for each domain, we utilize them as well: GENIA1000 with 1,000 sentences and Questions with 1,328 sentences, both annotated with pre-terminal CCG categories. Since pre-terminal categories are not sufficient to train depccg, we automatically annotate Head First dependencies using RBG parser (Lei et al., 2014), trained to produce this type of trees (We follow Yoshikawa et al. 
(2017)’s tri-training setup). Following the previous work, the evaluation is based on the Stanford grammatical relations (GR; Marneffe et al. (2006)), a deep syntactic representation that can be recovered from a CCG tree.10 Biomedical Domain By converting the Genia corpus (Tateisi et al., 2005), we obtain a new CCGbank of 4,432 sentences from biomedical papers annotated with CCG trees. During the process, we have successfully assigned the category NP to all the occurrences of complex biomedical terms by imposing constraints (§4) that NP spans in the original corpus be assigned the category NP in the resulting CCG trees as well. 10We used their public script (https://www.cl. cam.ac.uk/˜sc609/candc-1.00.html). 135 Table 3 shows the results of the parsing experiment, where the scores of previous work (C&C (Clark and Curran, 2007) and EasySRL (Lewis et al., 2016)) are included for reference. The plain depccg already achieves higher scores than these methods, and boosts when combined with ELMo (improvement of 2.73 points in terms of F1). Fine-tuning the parser on GENIA1000 results in a mixed result, with slightly lower scores. This is presumably because the automatically annotated Head First dependencies are not accurate. Finally, by fine-tuning on the Genia CCGbank, we observe another improvement, resulting in the highest 86.52 F1 score. Questions In this experiment, we obtain a CCG version of the QuestionBank (Judge et al., 2006), consisting of 3,622 question sentences, excluding ones contained in the evaluation data. Table 4 compares the performance of depccg fine-tuned on the QuestionBank, along with other baselines. Contrary to our expectation, the plain depccg retrained on Questions data performs the best, with neither ELMo nor the proposed method taking any effect. We hypothesize that, since the evaluation set contains sentences with similar constructions, the contributions of the latter two methods are less observable on top of Questions data. Inspection of the output trees reveals that this is actually the case; the majority of differences among parser’s configurations are irrelevant to question constructions, suggesting that the models capture well the syntax of question in the data.11 5.4 Speech Conversation Setup We apply the proposed method to a new domain, transcription texts of speech conversation, with new applications of CCG parsing in mind. We create the CCG version of the Switchboard corpus (Godfrey et al., 1992), by which, as far as we are aware of, we conduct the first CCG parsing experiments on speech conversation.12 We obtain a new CCGbank of 59,029/3,799/7,681 sen11Due to many-to-many nature of mapping to GRs, the evaluation set contains relations not recoverable from the gold supertags using the provided script; for example, we find that from the annotated supertags of sentence How many battles did she win ?, the (amod battle many) relation is obtained instead of the gold det relation. This implies one of the difficulties to obtain further improvement on this set. 12Since the annotated part-of-speech tags are noisy, we automatically reannotate them using the core web sm model of spaCy (https://spacy.io/), version 2.0.16. a. we should cause it does help b. 
the only problem i see with term limitations is that i think that the bureaucracy in our government as is with most governments is just so complex that there is a learning curve and that you ca n’t just send someone off to washington and expect his first day to be an effective congress precision Table 5: Example sentences from the manually annotated subset of Switchboard test set. Error type # PP-attachment 3 Adverbs attaching wrong place 11 Predicate-argument 5 Imperative 2 Informal functional words 2 Others 11 Table 6: Error types observed in the manually annotated Switchboard subset data. tences for each of the train/test/development set, where the data split follows prior work on dependency parsing on this dataset (Honnibal and Johnson, 2014). In the conversion, we have to handle one of the characteristics of speech transcription texts, disfluencies. In real application, it is ideal to remove disfluencies such as interjection and repairs (e.g., I want a flight to Boston um to Denver), prior to performing CCG-based semantic composition. Since this corpus contains a layer of annotation that labels their occurrences, we perform constrained decoding to mark the gold disfluencies in a tree with a dummy category X, which can combine with any category from both sides (i.e., for all category C, C X ⇒C and X C ⇒C are allowed). In this work, we perform parsing experiments on texts that are clean of disfluencies, by removing X-marked words from sentences (i.e., a pipeline system setting with an oracle disfluency detection preprocessor).13 Another issue in conducting experiments on this dataset is evaluation. Since there exists no evaluation protocol for CCG parsing on speech texts, we evaluate the quality of output trees by two procedures; in the first experiment, we parse the entire test set, and convert them to constituency trees us13We regard developing joint disfluency detection and syntactic parsing method based on CCG as future work. 136 if ((S\NP)/(S\NP))/Sdcl CD N NP un = (Sdcl\NP)/NP 8 N NPun Sdcl\NP > Sdcl < and conj BE N NP un = (Sdcl\NP)/NP 2 N NPun Sdcl\NP > Sdcl < Sdcl\Sdcl Φ Sdcl < (S\NP)/(S\NP) > , , find (Sdcl\NP)/NP AE N NP un Sdcl\NP > Sdcl\NP rp . . Sdcl\NP rp Sdcl\NP > Figure 3: Parse output by the re-trained parser for sentence if CD = 8 and BE = 2, find AE. from math problems. Method Whole Subset P R F1 UF1 LF1 depccg 74.73 73.91 74.32 90.68 82.46 + ELMo 75.76 76.62 76.19 93.23 86.46 + Proposed 78.03 77.06 77.54 95.63 92.65 Table 7: Results on speech conversation texts (§5.4), on the whole test set and the manually annotated subset. Method UF1 LF1 depccg 88.49 66.15 + ELMo 89.32 70.74 + Proposed 95.83 80.53 Table 8: Results on math problems (§5.5). ing a method by Kummerfeld et al. (2012).14 We report labeled bracket F1 scores between the resulting trees and the gold trees in the true Switchboard corpus, using the EVALB script.15 However, the reported scores suffer from the compound effect of failures in CCG parsing as well as ones occurred in the conversion to the constituency trees. To evaluate the parsing performance in detail, the first author manually annotated a subset of randomly sampled 100 sentences from the test set. Sentences with less than four words are not contained, to exclude short phrases such as nodding. Using this test set, we report the standard CCG parsing metrics. 
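As a reminder of what these metrics measure (§2), the sketch below computes labeled and unlabeled precision, recall, and F1 over sets of predicate–argument dependencies; the tuple representation is our own simplification, not the format of the official evaluation script.

```python
from typing import Set, Tuple

# (head word index, argument word index, head category, argument slot)
Dep = Tuple[int, int, str, int]

def prf(pred: Set[Dep], gold: Set[Dep], labeled: bool = True):
    """Precision/recall/F1 over predicate-argument dependencies.
    Unlabeled scores ignore the category and argument-slot fields."""
    strip = (lambda d: d) if labeled else (lambda d: d[:2])
    p, g = {strip(d) for d in pred}, {strip(d) for d in gold}
    correct = len(p & g)
    precision = correct / len(p) if p else 0.0
    recall = correct / len(g) if g else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# toy example for "Kyle wants to see cats": 'see' (index 3) takes Kyle and cats
gold = {(3, 0, r"(S\NP)/NP", 1), (3, 4, r"(S\NP)/NP", 2)}
pred = {(3, 0, r"(S\NP)/NP", 1), (3, 4, r"S\NP", 2)}     # wrong category on one arc
print(prf(pred, gold, labeled=True))    # (0.5, 0.5, 0.5)
print(prf(pred, gold, labeled=False))   # (1.0, 1.0, 1.0)
```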
Sentences from this domain exhibit other challenging aspects (Table 5), such as less formal expressions (e.g., use of cause instead of because) (5a), and lengthy sentences with many embedded phrases (5b).16 Results On the whole test set, depccg shows consistent improvements when combined with ELMo and the proposed method, in the constituency-based metrics (Whole columns in 14https://github.com/jkkummerfeld/ berkeley-ccg2pst 15https://nlp.cs.nyu.edu/evalb/ 16Following Honnibal and Johnson (2014), sentences in this data are fully lower-cased and contain no punctuation. Table 7). Though the entire scores are relatively lower, the result suggests that the proposed method is effective to this domain on the whole. By directly evaluating the parser’s performance in terms of predicate argument relations (Subset columns), we observe that it actually recovers the most of the dependencies, with the fine-tuned depccg achieving as high as 95.63% unlabeled F1 score. We further investigate error cases of the finetuned depccg in the subset dataset (Table 6). The tendency of error types is in accordance with other domains, with frequent errors in PPattachment and predicate-argument structure, and seemingly more cases of attachment errors of adverbial phrases (11 cases), which occur in lengthy sentences such as in Table 5b. Other types of error are failures to recognize that the sentence is in imperative form (2 cases), and ones in handling informal functional words such as cause (Table 5a). We conclude that the performance on this domain is as high as it is usable in application. Since the remaining errors are general ones, they will be solved by improving general parsing techniques. 5.5 Math Problems Setup Finally, we conduct another experiment on parsing math problems. Following previous work of constituency parsing on math problem (Joshi et al., 2018), we use the same train/test sets by Seo et al. (2015), consisting of 63/62 sentences respectively, and see if a CCG parser can be adapted with the small training samples. Again, the first author annotated both train/test sets, dependency trees on the train set, and CCG trees on the test set, respectively. In the annotation, we follow the manuals of the English CCGbank and the UD. We regard as an important future work extending the annotation to include fine-grained feature values in categories, e.g., marking a distinction between integers and real numbers (Matsuzaki et al., 2017). Figure 3 shows an example 137 CCG tree from this domain, successfully parsed by fine-tuned depccg. Results Table 8 shows the F1 scores of depccg in the respective settings. Remarkably, we observe huge additive performance improvement. While, in terms of labeled F1, ELMo contributes about 4 points on top of the plain depccg, adding the new training set (converted from dependency trees) improves more than 10 points.17 Examining the resulting trees, we observe that the huge gain is primarily involved with expressions unique to math. Figure 3 is one of such cases, which the plain depccg falsely analyzes as one huge NP phrase. However, after fine-tuning, it successfully produces the correct “If S1 and S2, S3” structure, recognizing that the equal sign is a predicate. 6 Conclusion In this work, we have proposed a domain adaptation method for CCG parsing, based on the automatic generation of new CCG treebanks from dependency resources. 
We have conducted experiments to verify the effectiveness of the proposed method on diverse domains: on top of existing benchmarks on biomedical texts and question sentences, we newly conduct parsing experiments on speech conversation and math problems. Remarkably, when applied to our domain adaptation method, the improvements in the latter two domains are significant, with the achievement of more than 5 points in the unlabeled metric. Acknowledgments We thank the three anonymous reviewers for their insightful comments. This work was in part supported by JSPS KAKENHI Grant Number JP18J12945, and also by JST AIP-PRISM Grant Number JPMJCR18Y1, Japan. References Lasha Abzianidze. 2017. LangPro: Natural Language Theorem Prover. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 115– 120. Association for Computational Linguistics. 17Note that, while in the experiment on this dataset in the previous constituency parsing work (Joshi et al., 2018), they evaluate on partially annotated (unlabeled) trees, we perform the “full” CCG parsing evaluation, employing the standard evaluation metrics. Given that, the improvement is even more significant. Bharat Ram Ambati, Tejaswini Deoskar, and Mark Steedman. 2013. Using CCG categories to improve Hindi dependency parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 604–609. Association for Computational Linguistics. Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: An Approach to Almost Parsing. Computational Linguistics, 25(2):237–265. Johan Bos, Valerio Basile, Kilian Evang, Noortje J. Venhuizen, and Johannes Bjerva. 2017. The Groningen Meaning Bank. In Handbook of Linguistic Annotation, pages 463–496. Springer Netherlands. Johan Bos, Bosco Cristina, and Mazzei Alessandro. 2009. Converting a Dependency Treebank to a Categorial Grammar Treebank for Italian. In In Proceedings of the Eighth International Workshop on Treebanks and Linguistic Theories, pages 27–38. Stephen Clark and James R. Curran. 2007. WideCoverage Efficient Statistical Parsing with CCG and Log-Linear Models. Computational Linguistics, 33(4):493–552. Stephen Clark, Julia Hockenmaier, and Mark Steedman. 2002. Building Deep Dependency Structures with a Wide-coverage CCG Parser. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 327–334. Association for Computational Linguistics. Djork-Arn´e Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2016. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). ICLR. Timothy Dozat and Christopher D. Manning. 2017. Deep Biaffine Attention for Neural Dependency Parsing. ICLR. John J. Godfrey, Edward C. Holliman, and Jane McDaniel. 1992. SWITCHBOARD: Telephone Speech Corpus for Research and Development. In Proceedings of the 1992 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 517–520. IEEE Computer Society. Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank. Computational Linguistics, 33(3):355–396. Matthew Honnibal and Mark Johnson. 2014. Joint Incremental Disfluency Detection and Dependency Parsing. Transactions of the Association for Computational Linguistics, 2:131–142. Xinzhou Jiang, Zhenghua Li, Bo Zhang, Min Zhang, Sheng Li, and Luo Si. 2018. Supervised Treebank Conversion: Data and Approaches. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 2706–2716. Association for Computational Linguistics. 138 Vidur Joshi, Matthew Peters, and Mark Hopkins. 2018. Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 1190–1199. Association for Computational Linguistics. John Judge, Aoife Cahill, and Josef van Genabith. 2006. QuestionBank: Creating a Corpus of ParseAnnotated Questions. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 497–504. Association for Computational Linguistics. Dan Klein and Christopher D. Manning. 2003. A* Parsing: Fast Exact Viterbi Parse Selection. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 40–47. Association for Computational Linguistics. Jonathan K. Kummerfeld, Dan Klein, and James R. Curran. 2012. Robust Conversion of CCG Derivations to Phrase Structure Trees. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 105–109. Association for Computational Linguistics. Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2016. Global Neural CCG Parsing with Optimality Guarantees. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2366–2376. Association for Computational Linguistics. Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-Rank Tensors for Scoring Dependency Structures. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1381–1391. Association for Computational Linguistics. Mike Lewis, Kenton Lee, and Luke Zettlemoyer. 2016. LSTM CCG Parsing. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 221–231. Association for Computational Linguistics. Mike Lewis and Mark Steedman. 2014a. A* CCG Parsing with a Supertag-factored Model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 990– 1000. Association for Computational Linguistics. Mike Lewis and Mark Steedman. 2014b. Improved CCG Parsing with Semi-supervised Supertagging. Transactions of the Association for Computational Linguistics, 2:327–338. Zhenghua Li, Min Zhang, Yue Zhang, Zhanyi Liu, Wenliang Chen, Hua Wu, and Haifeng Wang. 2016. Active Learning for Dependency Parsing with Partial Annotation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 344–354. Association for Computational Linguistics. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2):314–330. M. Marneffe, B. Maccartney, and C. Manning. 2006. Generating Typed Dependency Parses from Phrase Structure Parses. In Proceedings of the Fifth International Conference on Language Resources and Evaluation, pages 449–454. European Language Resources Association. Pascual Mart´ınez-G´omez, Koji Mineshima, Yusuke Miyao, and Daisuke Bekki. 2017. On-demand Injection of Lexical Knowledge for Recognising Textual Entailment. 
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 710–720. Association for Computational Linguistics. Takuya Matsuzaki, Takumi Ito, Hidenao Iwane, Hirokazu Anai, and Noriko H. Arai. 2017. Semantic Parsing of Pre-university Math Problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2131– 2141. Association for Computational Linguistics. Seyed Abolghasem Mirroshandel and Alexis Nasr. 2011. Active Learning for Dependency Parsing Using Partially Annotated Sentences. In Proceedings of the 12th International Conference on Parsing Technologies, pages 140–149. Association for Computational Linguistics. Jeff Mitchell and Mark Steedman. 2015. Parser Adaptation to the Biomedical Domain without ReTraining. In Proceedings of the Sixth International Workshop on Health Text Mining and Information Analysis, pages 79–89. Association for Computational Linguistics. Makoto Miwa and Mohit Bansal. 2016. End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1105–1116. Association for Computational Linguistics. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A Multilingual Treebank Collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation, pages 1659–1666. European Language Resources Association. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke 139 Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,, pages 2227–2237. Association for Computational Linguistics. Laura Rimell and Stephen Clark. 2008. Adapting a Lexicalized-Grammar Parser to Contrasting Domains. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 475–484. Association for Computational Linguistics. Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren Etzioni, and Clint Malcolm. 2015. Solving geometry problems: Combining text and diagram interpretation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1466–1476. Association for Computational Linguistics. Miloˇs Stanojevi´c and Mark Steedman. 2019. CCG Parsing Algorithm with Incremental Tree Rotation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 228–239. Association for Computational Linguistics. Mark Steedman. 2000. The Syntactic Process. The MIT Press. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1556–1566. Association for Computational Linguistics. Yuka Tateisi, Akane Yakushiji, Tomoko Ohta, and Jun’ichi Tsujii. 2005. Syntax Annotation for the GENIA Corpus. 
In Companion Volume to the Proceedings of Conference including Posters/Demos and tutorial abstracts, pages 220–225. Association for Computational Linguistics. Masashi Yoshikawa, Hiroshi Noji, and Yuji Matsumoto. 2017. A* CCG Parsing with a Supertag and Dependency Factored Model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 277–287. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1351–1360 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1351 Exploiting Entity BIO Tag Embeddings and Multi-task Learning for Relation Extraction with Imbalanced Data Wei Ye1*†, Bo Li1,3*, Rui Xie1,2, Zhonghao Sheng1,2, Long Chen1,2 and Shikun Zhang1 1National Engineering Research Center for Software Engineering, Peking University 2School of Software and Microelectronics, Peking University 3Automation Dept, Beijing University of Posts and Telecommunications. [email protected], [email protected], {ruixie, zhonghao.sheng, clcmlxl, zhangsk}@pku.edu.cn Abstract In practical scenario, relation extraction needs to first identify entity pairs that have relation and then assign a correct relation class. However, the number of non-relation entity pairs in context (negative instances) usually far exceeds the others (positive instances), which negatively affects a model’s performance. To mitigate this problem, we propose a multitask architecture which jointly trains a model to perform relation identification with crossentropy loss and relation classification with ranking loss. Meanwhile, we observe that a sentence may have multiple entities and relation mentions, and the patterns in which the entities appear in a sentence may contain useful semantic information that can be utilized to distinguish between positive and negative instances. Thus we further incorporate the embeddings of character-wise/word-wise BIO tag from the named entity recognition task into character/word embeddings to enrich the input representation. Experiment results show that our proposed approach can significantly improve the performance of a baseline model with more than 10% absolute increase in F1-score, and outperform the state-of-theart models on ACE 2005 Chinese and English corpus. Moreover, BIO tag embeddings are particularly effective and can be used to improve other models as well. 1 Introduction Relation extraction, which aims to extract semantic relations from a given instance—entity pair and the corresponding text in context, is an important and challenging task in information extraction. It serves as a step stone for many downstream tasks such as question answering and knowledge graph construction. * indicates equal contribution. † Corresponding author. Traditionally, researchers mainly use either feature-based methods (Kambhatla, 2004; Boschee et al., 2005; GuoDong et al., 2005; Jiang and Zhai, 2007; Chan and Roth, 2010; Sun et al., 2011; Nguyen and Grishman, 2014) or kernelbased methods (Zelenko et al., 2003; Culotta and Sorensen, 2004; Bunescu and Mooney, 2005; Mooney and Bunescu, 2006; Zhang et al., 2006; Zhou et al., 2007; Giuliano et al., 2007; Qian et al., 2008; Nguyen et al., 2009; Sun and Han, 2014) for relation extraction, which tend to heavily rely on handcraft features and existing natural language processing (NLP) tools. Recently, deep learning models, including convolutional neural network (CNN) (Liu et al., 2013; Zeng et al., 2014; Nguyen and Grishman, 2015; Zeng et al., 2015; dos Santos et al., 2015; Lin et al., 2016) and recurrent neural network (RNN) (Miwa and Bansal, 2016; Zhou et al., 2016; She et al., 2018) w/o variants of attention mechanism have been widely applied to relation extraction and achieved remarkable success. 
The relation extraction task can be divided into two steps: determining which pair of entities in a given sentence has relation, and assigning a correct relation class to the identified entity pair. We define these two steps as two related tasks: Relation Identification and Relation Classification. If one only needs to categorize the given entities that are guaranteed to have some expected relation, then relation extraction is reduced to relation classification (Nguyen and Grishman, 2015). One variation of relation classification is the introduction of a new artificial relation class “Other.” If the number of non-relation entity pairs in context (negative instances) in the dataset is comparable to the number of entity pairs that have relation in context (positive instances), then the nonrelation pairs can be treated as having the relation class Other. 1352 Strictly speaking, most existing studies of relation extraction treat the task as relation classification. However, relation extraction often comes with an extremely imbalanced dataset where the number of non-relation entity pairs far exceeds the others, making it a more challenging yet more practical task than relation classification. For example, after filtering out those entity pairs whose entity type combination has never appeared in the Chinese corpus of ACE 2005, there are still more than 200,000 entity pairs left, in which the positive/negative instance ratio is about 1:20. In this paper, we focus on the relation extraction task with an imbalanced corpus, and adopt multi-task learning paradigm to mitigate the data imbalance problem. Only a few studies have considered the negative effect of having too many negative instances. Nguyen and Grishman (2015) proposed using CNN with filters of multiple window sizes. dos Santos et al. (2015) focused on learning the common features of the positive instances by computing only the scores of the relation classes excluding the class Other, and proposed using a pairwise ranking loss. We have also adopted these methods in our approach. For relation classification, the prediction error can be categorized into three types: 1) false negative—predicting a positive instance to be negative; 2) false positive—predicting a negative instance to be positive; 3) wrong relation class— predicting a positive instance to be positive yet assigning a wrong relation class. After training a baseline model to perform relation classification on the extremely imbalanced ACE 2005 Chinese corpus and dissecting its prediction errors, we find that the proportion of these three types of error are 30.20%, 62.80% and 7.00% respectively. It is conceivable that to improve a model’s performance on such corpus, it is best to focus on telling positive and negative instances apart. Since the negative instances may not have much in common, distinguishing between positive and negative instances is much more challenging than only classifying positive instances into a correct class. Moreover, the total number of positive instances combined is more comparable to the number of negative instances than positive instances of any individual relation class alone. Based on these rationales, we propose to jointly train a model to do another binary classification task—relation identification—alongside relation classification to mitigate the data imbalance problem. Another facet that most existing studies fail to consider is that there may be multiple relation mentions in a given sentence if it contains multiple entities. 
In the Chinese corpus of ACE 2005, there are 4.9 entities and 1.34 relation mentions in a sentence on average. The patterns in which these entities appear in the sentence can provide useful semantic information to distinguish between positive and negative instances. Therefore, we exploit the character-wise/word-wise BIO (Beginning, Inside, Outside) tag used in the named entity recognition (NER) task to enrich the input representation. The details of our approach will be presented in Section 2. We conducted extensive experiments on ACE 2005 Chinese and English corpus. Results show that both the novel multi-task architecture and the incorporation of BIO tag embeddings can improve the performance, and the model equipped with both achieves the highest F1-score, significantly outperforming the state-of-the-art models. Analysis of the results indicates that our proposed approach can successfully address the problem of having a large number of negative instances. To summarize, we make the following contributions in this paper: 1. We propose a multi-task architecture which jointly trains a model to perform relation identification with cross-entropy loss and relation classification task with ranking loss, which can successfully mitigate the negative effect of having too many negative instances. 2. We incorporate the embeddings of characterwise/word-wise BIO tag from NER task to enrich the input representation, which proves to be very effective not only for our model but for other models as well. We argue that BIO tag embeddings could be a general part of character/word representation, just like the entity position embeddings (Zeng et al., 2014) that many researchers would use in recent years. 2 Proposed Approach We have designed a novel multi-task architecture which combines two related tasks: 1) relation identification, which is a binary classification problem to determine whether a given entity pair 1353 has relation; 2) relation classification, which is a multiple classification problem to determine the relation class. Figure 1 shows the overall architecture. Figure 1: The overall multi-task architecture. To demonstrate, there are three window sizes for filters in the convolutional layer, as denoted by the three-layer stack; for each window size there are four filters, as denoted by the number of rows in each layer. Maxpooling is applied to each row in each layer of the stack, and the dimension of the output is equal to the total number of filters. Three are three main parts in the architecture: • Input Layer Given an input sentence x of n words1 {x1, x2, ..., xn} with m entities {e1, e2, ..., em} where ei ∈x, and two target entities et1, et2 ∈{e1, e2, ..., em}, the input layer transforms the sentence into a matrix X, which includes word embeddings, position embeddings and BIO tag embeddings of each word. • Convolutional Layer with Max-pooling Following the input layer is a convolutional layer that extracts high-level features, with filters (convolution kernels) of multiple window sizes (Nguyen and Grishman, 2015). Then max-pooling is applied to each feature map to reduce dimensionality. • Multi-task Layer In the multi-task layer, the model jointly learns the relation identification 1We use character-wise model for Chinese corpus and word-wise model for English corpus. For simplicity sake, we use “word” to denote either an English word or a Chinese character to present our model. task using cross-entropy loss and the relation classification task using ranking loss. 
2.1 Input Layer • Word Embeddings We use word embeddings with random initialization for each word in the input sentence. The dimension of word embeddings is dw. • Position Embeddings We also employ position embeddings to encode the relative distance between each word and the two target entities in the sentence. We believe that more useful information regarding the relation is hidden in the words closer to the target entities. The dimension of position embeddings is dp. • BIO Tag Embeddings Since an input sentence often contains more than two entities, we utilize the BIO tag information of entities to enrich the representation of the input. More specifically, for each word in the input sentence, if the word is part of an entity, we use the entity type T to label the start of the entity as BT , and label the rest of the entity as BI. If the word is not part of an entity, then we label the word as O. The dimension of BIO tag embeddings is dt. After concatenating all three embeddings together for each word, we transform a sentence into a matrix X = [w1, w2, ..., wn] as the input representation, where the column vector wi ∈ Rdw+2∗dp+dt. Figure 2 illustrates how to derive position embeddings and BIO tag embeddings. 2.2 Convolutional Layer with Multi-Sized Window Kernels Next, the matrix X is fed into the convolutional layer to extract high-level features. A filter with window size k can be denoted as F = [f1, f2, .., fk], where the column vector fi ∈ Rdw+2∗dp+dt. Apply the convolution operation on the two matrices X and F , and we get a score sequence T = {t1, t2, ..., tn−k+1}: ti = g( k−1 X j=0 f T j+1wj+i + b) (1) where g is a non-linear function and b is bias. In our experiments, we apply zero-paddings during the convolution operation, so that the score 1354 Figure 2: Illustration of BIO tag information and positional information for a given instance. In this example, there are five entities in the input sentence, and the target entities are the second and the third. sequence has the same length as the input sequence, which is n, instead of n −k + 1 if we apply Equation 1 which assumes no padding. There are multiple filters with different window sizes in the convolutional layer. Then max-pooling is applied to the outputted feature map of each filter. Eventually the input sentence x is represented as a column vector r with a dimension that is equal to the total number of filters. 2.3 Multi-Task Layer • Relation Identification with Cross-entropy Loss For the binary classification task of relation identification we use cross-entropy loss. Positive instances are labelled “1” and negative instances “0.” If p is the one-hot true distribution over all classes C = {c} and q is the distribution a model predicts, then the cross-entropy loss of a given instance can be defined as follows: H(p, q) = − X c∈C p(c)log(q(c)) (2) So the loss of this task can be defined as: loss1 = − X (p(1)log(q(1))+p(0)log(q(0))) (3) • Relation Classification with Ranking Loss For the multiple classification task of relation classification, we use the pairwise ranking loss proposed by (dos Santos et al., 2015). Given the sentence representation r, the score for class c is computed as: sc = rT [W classes]c (4) where W classes is a matrix to be learned, whose number of columns is equal to the number of classes. W classes c is a column vector corresponding to class c, whose dimension is equal to that of r. For each instance, the input sentence x has a correct class label y+ and incorrect ones y−. 
Let sy+ and sy−be the scores for y+ and y−respectively, then the ranking loss can be computed by the following two equations: L+ = log(1 + exp(γ(m+ −sy+))) (5) L−= log(1 + exp(γ(m−+ sy−))) (6) where m+ and m−are margins and γ is a scaling factor. L+ decreases as the score sy+ increases, and is close to zero when sy+ > m+, which encourages the network to give a score greater than m+ for the correct class. Similarly, L−decreases as the score sy−decreases, and is close to zero when sy−< −m−, which encourages the network to give scores smaller than −m−for incorrect classes. For the class Other, only L−is calculated to penalize the incorrect prediction. And following (dos Santos et al., 2015), we only choose the class with the highest score among all incorrect classes as the one to perform a training step. Then we optimize the pairwise ranking loss function: loss2 = X (L+ + L−) (7) The total loss function for multi-task training is: L = α · loss1 + β · loss2 (8) where α and β are weights of the two losses. In our experiments, we find that α = β yields the best result. 2.4 Prediction We only use the class score sc in the multiple classification task to make predictions, while the binary classification task is only used for optimizing the network parameters. 1355 Given an instance, the prediction P is made by: P = ( arg max c (sc) max(sc) ≥θ Other max(sc) < θ (9) where θ is a threshold. The relation in an instance is predicted as the class Other if the score sc is less than θ for every class c. Otherwise, we choose the class with the highest score as the prediction. 3 Experiments and Results 3.1 Data Preparation We use both the Chinese and English corpus of ACE 2005 to evaluate our proposed approach. Only positive instances have been annotated in the dataset. To extract negative instances, we need to enumerate every entity pair in a sentence. We consider two approaches: one considers the direction of relation while the other does not. For the first approach, we extract only one instance for any pair of entities e1, e2 in a sentence x regardless of direction. Those instances that have been annotated, regardless of direction, are positive instances, and the rest are negative instances. A trained model only needs to determine whether an entity pair has relation. For the second approach, we extract two instances for any pair of entities in a sentence, with the two entities in different orders. Since at most one of the two instances has been annotated to be positive instances, we treat the other one and those neither of which are annotated to be negative instances. A trained model will additionally need to identify head entity and tail entity in a relation, which is considerably harder. After extracting negative instances, we further filtered out those instances whose entity type combination has never appeared in a relation mention. Then we added the remaining negative instances to the positive instances to complete data preparation. We adopted the first approach to extract negative instances from the English corpus of ACE 2005, and ended up with 71,895 total instances after filtering, among which 64,989 are negative instances. The positive/negative instance ratio is about 1:9.4. We adopted the second approach to extract negative instances from the Chinese corpus of ACE 2005, and ended up with 215,117 total instances after filtering, among which 205,800 of them are negative instances. The positive/negative instance ratio is about 1:20. 
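Before spelling out the prediction rule, the sketch below illustrates how the ranking loss of Eqs. (5)–(7) and the combined objective of Eq. (8) could be implemented. It is our own PyTorch illustration with assumed tensor shapes, not the authors' code; the default margins and scaling factor follow the English-corpus settings reported in §3.2.

```python
import torch
import torch.nn.functional as F

def ranking_loss(scores, labels, m_pos=2.5, m_neg=0.5, gamma=2.0):
    """Pairwise ranking loss (Eqs. (5)-(7)).
    scores: (batch, n_rel) scores s_c for the real relation classes only;
    labels: (batch,) gold class ids, with -1 standing for the class Other."""
    batch = scores.size(0)
    is_pos = labels >= 0
    gold = labels.clamp(min=0)                      # dummy index 0 for Other rows
    s_pos = scores.gather(1, gold.unsqueeze(1)).squeeze(1)
    # highest-scoring incorrect class (for Other, every class is incorrect)
    masked = scores.clone()
    rows = torch.arange(batch)[is_pos]
    masked[rows, gold[is_pos]] = float('-inf')
    s_neg = masked.max(dim=1).values
    l_pos = torch.log1p(torch.exp(gamma * (m_pos - s_pos)))
    l_neg = torch.log1p(torch.exp(gamma * (m_neg + s_neg)))
    # Other instances only incur the negative term
    return (l_pos * is_pos.float() + l_neg).sum()

def multitask_loss(scores, binary_logits, labels, alpha=1.0, beta=1.0):
    """L = alpha * loss1 (relation identification, cross-entropy)
         + beta  * loss2 (relation classification, ranking)   -- Eq. (8)."""
    has_rel = (labels >= 0).long()                  # 1 = positive instance
    loss1 = F.cross_entropy(binary_logits, has_rel, reduction='sum')
    loss2 = ranking_loss(scores, labels)
    return alpha * loss1 + beta * loss2

# toy batch: 6 relation classes; the second instance is a negative ("Other") pair
scores = torch.randn(2, 6)
binary_logits = torch.randn(2, 2)
labels = torch.tensor([3, -1])
print(multitask_loss(scores, binary_logits, labels))
```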
3.2 Experiment Settings 3.2.1 Embeddings In our approach, we use three kinds of embeddings, namely word embeddings, position embeddings and BIO tag embeddings. They are all randomly initialized, and are adjusted during training. The dimensions of these three embeddings are 200, 50 and 50 respectively. 3.2.2 Hyper-parameters The number of filters in the convolutional layer is 64, and the window size of filters ranges from 4 to 10. The fully connected layer to calculate class scores has 128 hidden units with a dropout rate of 0.2. The batch size is 256. The neural networks are trained using the RMSprop optimizer with the learning rate α set to 0.001. As for the parameters in the pairwise ranking loss, for the English corpus, we set m+ to 2.5, m− to 0.5, γ to 2 and θ to 0; for the Chinese corpus, we set m+ to 4.5, m−to -0.5, γ to 2 and θ to 1. The cross-entropy loss and the pairwise ranking loss in multi-task learning are equally weighted. 3.3 Experiment Results We use five-fold cross-validation to reduce the randomness in the experiment results. The precision (P), recall (R) and F1-score (F1) of the positive instances are used as evaluation metrics. We compare several variants of our proposed models with the state-of-the-art models on the English and Chinese corpus of ACE 2005 respectively. Variants of our models are: • Baseline: a model that uses CNN with filters of multiple window sizes and only performs the relation classification task using the pairwise ranking loss. The baseline model is motivated by dos Santos et al. (2015) and Nguyen and Grishman (2015). • Baseline+Tag: baseline model with BIO tag embeddings. • Baseline+MTL: baseline model that performs relation identification using crossentropy loss in addition to relation classification. 1356 • Baseline+MTL+Tag, baseline model that adopts both multi-tasking learning and BIO tag embeddings. For the English corpus, we choose SPTree (Miwa and Bansal, 2016) and Walk-based Model (Christopoulou et al., 2018) for comparison. Since the data preparation is similar, we directly report the results from the original papers. The experiment results are summarized in Table 1. For the Chinese corpus, we choose PCNN (Zeng et al., 2015) and Eatt-BiGRU (Qin et al., 2017) for comparison. We re-implemented these two models, and the experiment results are summarized in Table 2. Model P% R% F1% SPTree 70.1 61.2 65.3 Walk-based 69.7 59.5 64.2 Baseline 58.8 57.3 57.2 Baseline+Tag 61.3 76.7 67.4 Baseline+MTL 63.8 56.1 59.5 Baseline+MTL+Tag 66.5 71.8 68.9 Table 1: Comparison between our model and the stateof-the-art models using ACE 2005 English corpus. F1scores higher than the state-of-the-art are in bold. Model P% R% F1% PCNN 54.4 42.1 46.1 Eatt-BiGRU 57.8 49.7 52.0 Baseline 48.5 57.1 51.7 Baseline+Tag 61.8 62.7 61.4 Baseline+MTL 56.7 52.9 53.8 Baseline+MTL+Tag 61.3 65.8 62.9 Table 2: Comparison between our model and the stateof-the-art models using ACE 2005 Chinese corpus. F1scores higher than the state-of-the-art are in bold. From Table 1 and Table 2, we can see: 1. Both BIO tag embeddings and multi-task learning can improve the performance of the baseline model. 2. Baseline+Tag can outperform the state-ofthe-art models on both the Chinese and English corpus. Compared to the baseline model, BIO tag embeddings lead to an absolute increase of about 10% in F1-score, which indicates that BIO tag embeddings are very effective. 3. 
Multi-task learning can yield further improvement in addition to BIO tag embeddings: Baseline+MTL+Tag achieves the highest F1-score on both corpora. 3.4 Analysis 3.4.1 Effectiveness of BIO Tag Embeddings To further investigate the effectiveness of BIO tag embeddings, we incorporated these embeddings into PCNN (Zeng et al., 2015) and EattBiGRU (Qin et al., 2017) to form two new models: PCNN+Tag and East-BiGRU+Tag, and evaluated their performance using the Chinese corpus of ACE 2005. The results are summarized in Table 3. Model P% R% F1% PCNN+Tag 74.3 50.4 58.2 Eatt-BiGRU+Tag 67.8 56.4 61.1 Table 3: Evaluation of state-of-the-art models with BIO Tag embeddings using ACE 2005 Chinese corpus. Compare Table 3 with Table 2, and we can see that thanks to BIO tag embeddings, the F1-score of PCNN increases from 46.1% to 58.2%, while the F1-score of Eatt-BiGRU increases from 52.0% to 61.1%. Such significant improvement is consistent with that on the baseline model and further attests to the effectiveness of BIO tag embeddings. We believe that BIO tag embeddings could be used as a general part of character/word representation for other models and potentially other tasks as well. 3.4.2 Effect of Positive/Negative Instance Ratio To see how our approach would perform as the degree of data imbalance varies, we used the same random seed to sample negative instances extracted from the Chinese corpus of ACE 2005 to add to the positive instances with different negative/positive instance ratios of 1:0.5, 1:1, 1:5, 1:10 and 1:15. Then we trained and evaluated two models: Baseline and Baseline+MTL+Tag. The results are shown in Figure 3. As shown in Figure 3, the performance drops for both models in terms of F1-score as the positive/negative instance ratio decreases. Yet, as the data become more imbalanced, the gap between the performances of Baseline+MTL+Tag and Baseline widens. This indicates that our proposed approach is more useful when the data is 1357 Model RI Loss Function in RC P% R% F1% Baseline+Tag × Ranking Loss 61.8 62.7 61.4 Baseline+Tag × Cross-entropy Loss 67.7 57.8 61.5 Baseline+Tag × Cross-entropy Loss + Ranking Loss 63.2 62.1 61.7 Baseline+MTL+Tag ✓ Ranking Loss 61.3 65.8 62.9 Baseline+MTL+Tag ✓ Cross-entropy Loss 61.6 62.0 62.0 Table 4: Evaluating the effect of the loss function used in relation classification w/o multi-tasking using ACE 2005 Chinese corpus. RC stands for relation classification and RI stands for relation identification. Figure 3: Effect of positive/negative instance ratio on F1-score. more imbalanced, though it performs better than the baseline regardless of the positive/negative instance ratio. 3.4.3 Effect of Loss Function w/o Multi-tasking Recall that in the multi-task architecture that we have proposed, we use the pairwise ranking loss for the multiple classification task of relation classification and use cross-entropy loss for the binary classification task of relation identification. We can, however, use cross-entropy in relation classification as well. To see how the choice of loss function affects performance in different scenarios, we switched ranking loss to cross-entropy loss or simply added cross-entropy loss in the relation classification task, and evaluated the Baseline+Tag model w/o multi-task learning, using the Chinese corpus of ACE 2005. The results are summarized in Table 4, from which we can see: 1. 
When doing a single task of relation classification, the model has higher precision and lower recall with cross-entropy loss, but has lower precision and higher recall with ranking loss; the F1-scores do not differ much. This suggests that for doing relation classification only, the choice of loss function seems not to matter too much. 2. Multi-task learning helps, regardless of the loss function used in relation classification. 3. When we use cross-entropy loss and ranking loss at the same time for relation classification, without multi-tasking, the F1score only increases slightly from 61.4% to 61.7%. But when cross-entropy is applied to another related task—relation identification, with multi-tasking, the F1-score increases from 61.4% to 62.9% with an absolute increase of 1.5%. This suggests that the effectiveness of our multi-task architecture mostly comes from the introduction of relation identification, and this binary classification task does help with the data imbalance problem, corroborating our motivation stated in Section 1. 4. In the same multi-tasking scenario, using ranking loss in relation classification is better than using cross-entropy loss (62.9% v.s. 62.0%), with an absolute increase of 0.9% in F1-score. Note that cross-entropy loss is already used in relation identification. This suggests that the diversity that comes with ranking loss can improve performance. 4 Related work Liu et al. (2013) were the first to adopt deep learning for relation extraction. They proposed to use a CNN to learn features automatically without using handcraft features. Zeng et al. (2014) also employed CNN to encode the sentence, using additional lexical features to word embeddings. Their biggest contribution is the introduction of position 1358 embeddings. Zeng et al. (2015) proposed a model named Piecewise Convolutional Neural Networks (PCNN) in which each convolutional filter pi is divided into three segments (pi1, pi2, pi3) by head and tail entities, and the max-pooling operation is applied to these three segments separately. dos Santos et al. (2015) also used CNN but proposed a new pairwise ranking loss function to reduce the impact of negative instances. Lin et al. (2016) used CNN with a sentence-level attention mechanism over multiple instances to reduce noise in labels. RNN is also widely used in relation extraction. Miwa and Bansal (2016) used LSTM and tree structures for relation extraction task. Their model is composed of three parts: an embedding layer to encode the input sentence, a sequence layer to identify whether a word is an entity or not, and a dependency layer for relation extraction. Zhou et al. (2016) used BiLSTM and attention mechanism to improve the model’s performance. She et al. (2018) proposed a novel Hierarchical attention-based Bidirectional Gated recurrent neural network (HBGD) integrated with entity descriptions to mitigate the problem of having wrong labels and enable the model to capture the most important semantic information. Entity background knowledge also contains important information for relation extraction. To capture such information, Ji et al. (2017) and She et al. (2018) extracted entity descriptions from Freebase and Wikipedia and used an encoder to extract features from these descriptions. He et al. (2018) used a dependency tree to represent the context of entities and transformed the tree into entity context embedding using tree-based GRU. Unlike most existing works which only consider a single entity pair in a sentence, Christopoulou et al. 
(2018) considered multiple entity pairs in a sentence simultaneously and proposed a novel walk-based model to capture the interaction pattern among the entity pairs. Su et al. (2018) pointed out that the global statistics of relations between entity pairs are also useful, and proposed to construct a relation graph and learn relation embeddings to improve the performance of relation extraction. Several studies are motivated to mitigate the effect of wrong labels (Lin et al., 2016; She et al., 2018; Qin et al., 2018), and Li and Ji (2014) proposed to jointly extract entity mentions and relations. This is not the focus of our paper. 5 Conclusion In this paper, we focus on the relation extraction task with an imbalanced corpus. To mitigate the problem of having too many negative instances, we propose a multi-task architecture which jointly trains a model to perform the relation identification task with cross-entropy loss and the relation classification task with ranking loss. Moreover, we introduce the embeddings of characterwise/word-wise BIO tag from the named entity recognition task to enrich the input representation. Experiment results on ACE 2005 Chinese and English corpus show that our proposed approach can successfully address the data imbalance problem and significantly improve the performance, outperforming the state-of-the-art models in terms of F1-score. Particularly, we find BIO tag embeddings very effective, which we believe could be used as a general part of character/word representation. Acknowledgments We would like to thank Handan Institute of Innovation, Peking University for their support of our work. References Elizabeth Boschee, Ralph Weischedel, and Alex Zamanian. 2005. Automatic information extraction. In Proceedings of the International Conference on Intelligence Analysis, volume 71. Razvan C Bunescu and Raymond J Mooney. 2005. A shortest path dependency kernel for relation extraction. In Proceedings of the conference on human language technology and empirical methods in natural language processing, pages 724–731. Yee Seng Chan and Dan Roth. 2010. Exploiting background knowledge for relation extraction. In COLING 2010, 23rd International Conference on Computational Linguistics, Proceedings of the Conference, 23-27 August 2010, Beijing, China, pages 152–160. Fenia Christopoulou, Makoto Miwa, and Sophia Ananiadou. 2018. A walk-based model on entity graphs for relation extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 81–88. Aron Culotta and Jeffrey S. Sorensen. 2004. Dependency tree kernels for relation extraction. In Proceedings of the 42nd Annual Meeting of the Asso1359 ciation for Computational Linguistics, 21-26 July, 2004, Barcelona, Spain., pages 423–429. Claudio Giuliano, Alberto Lavelli, Daniele Pighin, and Lorenza Romano. 2007. Fbk-irst: Kernel methods for semantic relation extraction. In Proceedings of the 4th International Workshop on Semantic Evaluations, pages 141–144. Zhou GuoDong, Su Jian, Zhang Jie, and Zhang Min. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 427–434. Zhengqiu He, Wenliang Chen, Zhenghua Li, Meishan Zhang, Wei Zhang, and Min Zhang. 2018. See: Syntax-aware entity embedding for neural relation extraction. In Thirty-Second AAAI Conference on Artificial Intelligence. 
Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2017. Distant supervision for relation extraction with sentence-level attention and entity descriptions. In Thirty-First AAAI Conference on Artificial Intelligence. Jing Jiang and ChengXiang Zhai. 2007. A systematic exploration of the feature space for relation extraction. In Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, April 2227, 2007, Rochester, New York, USA, pages 113– 120. Nanda Kambhatla. 2004. Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations. In Proceedings of the ACL 2004 on Interactive poster and demonstration sessions, page 22. Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 1: Long Papers, pages 402–412. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2124–2133. ChunYang Liu, WenBo Sun, WenHan Chao, and Wanxiang Che. 2013. Convolution neural network for relation extraction. In International Conference on Advanced Data Mining and Applications, pages 231–242. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Raymond J Mooney and Razvan C Bunescu. 2006. Subsequence kernels for relation extraction. In Advances in neural information processing systems, pages 171–178. Thien Huu Nguyen and Ralph Grishman. 2014. Employing word representations and regularization for domain adaptation of relation extraction. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 2: Short Papers, pages 68–74. Thien Huu Nguyen and Ralph Grishman. 2015. Relation extraction: Perspective from convolutional neural networks. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 39–48. Truc-Vien T. Nguyen, Alessandro Moschitti, and Giuseppe Riccardi. 2009. Convolution kernels on constituent, dependency and sequential structures for relation extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, EMNLP 2009, 6-7 August 2009, Singapore, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1378–1387. Longhua Qian, Guodong Zhou, Fang Kong, Qiaoming Zhu, and Peide Qian. 2008. Exploiting constituent dependencies for tree kernel-based semantic relation extraction. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 697–704. Pengda Qin, Weiran Xu, and Jun Guo. 2017. Designing an adaptive attention mechanism for relation classification. In 2017 International Joint Conference on Neural Networks, IJCNN 2017, Anchorage, AK, USA, May 14-19, 2017, pages 4356–4362. Pengda Qin, Weiran Xu, and William Yang Wang. 2018. DSGAN: generative adversarial training for distant supervision relation extraction. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 496–505. C´ıcero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 626–634. Heng She, Bin Wu, Bai Wang, and Renjun Chi. 2018. Distant supervision for relation extraction with hierarchical attention and entity descriptions. In 2018 International Joint Conference on Neural Networks (IJCNN), pages 1–8. 1360 Yu Su, Honglei Liu, Semih Yavuz anda Izzeddin Gur, Huan Sun, and Xifeng Yan. 2018. Global relation embedding for relation extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 820–830. Ang Sun, Ralph Grishman, and Satoshi Sekine. 2011. Semi-supervised relation extraction with large-scale word clustering. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 19-24 June, 2011, Portland, Oregon, USA, pages 521–529. Le Sun and Xianpei Han. 2014. A feature-enriched tree kernel for relation extraction. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 2: Short Papers, pages 61–67. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. Journal of Machine Learning Research, 3:1083–1106. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1753– 1762. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, August 23-29, 2014, Dublin, Ireland, pages 2335–2344. Min Zhang, Jie Zhang, and Jian Su. 2006. Exploring syntactic features for relation extraction using a convolution tree kernel. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 288– 295. Guodong Zhou, Min Zhang, DongHong Ji, and Qiaoming Zhu. 2007. Tree kernel-based relation extraction with context-sensitive structured parse tree information. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attentionbased bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 207–212.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1361–1370 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1361 Joint Type Inference on Entities and Relations via Graph Convolutional Networks Changzhi Sun 1, ∗, Yeyun Gong2, Yuanbin Wu1, 3, Ming Gong2, Daxing Jiang2, Man Lan1, Shiliang Sun1, and Nan Duan2 1Department of Computer Science and Technology, East China Normal University 2Microsoft Research Asia 3State Key Laboratory of Cognitive Intelligence, iFLYTEK {changzhisun}@stu.ecnu.edu.cn {ybwu,mlan,slsun}@cs.ecnu.edu.cn {yegong, nanduan, migon, djiang}@microsoft.com Abstract We develop a new paradigm for the task of joint entity relation extraction. It first identifies entity spans, then performs a joint inference on entity types and relation types. To tackle the joint type inference task, we propose a novel graph convolutional network (GCN) running on an entity-relation bipartite graph. By introducing a binary relation classification task, we are able to utilize the structure of entity-relation bipartite graph in a more efficient and interpretable way. Experiments on ACE05 show that our model outperforms existing joint models in entity performance and is competitive with the state-of-the-art in relation performance. 1 Introduction Extracting entities and relations from plain texts is an important and challenging task in natural language processing. Given a sentence, the task aims to detect text spans with specific types (entities) and semantic relations among those text spans (relations). For example, in the Figure 1, “Toefting” is a person entity (PER), “teammates” is a person entity (PER), and the two entities have a PersonSocial relation (PER-SOC). To tackle the task of entity relation extraction, various methods have been proposed, which can be divided into two categories: pipeline models and joint models. Pipeline models extract entities and relations in two stages: entities are first extracted by an entity model, and then these extracted entities are used as the inputs of a relation model. Pipeline models often ignore interactions between the two models and they suffer from error propagation. Joint models integrate information between entities and relations into a single model with the joint training, and have achieved better ∗Work done while this author was an intern at Microsoft Research Asia. Toefting was convicted of assaulting a pair of wokers during a night out with national squad teammates in the capital … PER PER GPE Toefting teammates capital PER-SOC PHYS PHYS Toefting teammates capital (Toefting, teammates) (Toefting, capital) (teammates, capital) PER PER GPE PER-SOC-RIGHT PHYS-RIGHT PHYS-RIGHT Entity Types Entity Nodes Relation Nodes Relation Types Entity-Relation Graph Figure 1: An example from ACE05. The first part contains annotations and the second part is the entityrelation graph of the sentence used in GCN. results than the pipeline models. In this paper, we focus on joint models. More and more joint methods have been applied to this task. Among them, Miwa and Bansal (2016); Katiyar and Cardie (2017) identify the entity with a sequence labelling model, and identify the relation type with a multi-class classifier. These joint methods do joint learning through sharing parameters and they have no explicit interaction in type inference. 
In addition, some complex joint decoding algorithms (e.g., simultaneously decoding entities and relations in beam search) have been carefully investigated, including Li and Ji (2014); Zhang et al. (2017); Zheng et al. (2017); Wang et al. (2018). They jointly handle span detection and type inference to achieve more interactions. By inspecting the performance of existing models (Sun et al., 2018) on ACE05, we find that, for many entities, their spans are correctly identified, but their entity types are wrong. In particular, the F1 of extracting typed entities is about 83%, while the F1 of extracting entity spans is about 90%. Thus, if we have a better type inference model, we 1362 may get a better joint extraction performance. At the same time, we observe that a joint inference on entity and relation types could be potentially better than predicting them independently. For example, in Figure 1, the PER-SOC relation suggests that the type of “Toefting” might be PER, and vice versa. Moreover the PER (“Toefting”) and the relation PER-SOC could benefit from other relations such as PHYS. In this paper, we define joint entity relation extraction into two sub-tasks: entity span detection and entity relation type deduction. For entity span detection, we treat it as a sequence labeling problem. For joint type inference, we propose a novel and concise joint model based on graph convolutional networks (GCNs) (Kipf and Welling, 2017). The two sub-models are trained jointly. Specifically, given all detected entity spans in a sentence, we define an entity-relation bipartite graph. For each entity span, we assign an entity node. For each entity-entity pair, we assign a relation node. Edges connect relation nodes and their entity nodes (last part of Figure 1). With efficient graph convolution operations, we can learn representations for entity nodes and relation nodes by recursively aggregating information from their neighborhood over the bipartite graph. It helps us to concisely capture information among entities and relations. For example, in Figure 1, to predict the PER (“Toefting”), our joint model can pool the information of PER-SOC, PHYS, PER (“teammates”) and GPE (captital). To further utilize the structure of the graph, we also propose assigning different weights on graph edges. In particular, we introduce a binary relation classification task, which is to determine whether the two entities form a valid relation. Different from previous GCN-based models (Shang et al., 2018; Zhang et al., 2018), the adjacency matrix of graph is based on the output of binary relation classification, which makes the proposed adjacency matrix more explanatory. To summarize, the main contributions of this work are 1 • We present a novel and concise joint model to handle the joint type inference problem based on graph convolutional network (GCN). • We introduce a binary relation classification task to explore the structure of entity-relation 1 Our implementation is available at https:// github.com/changzhisun/AntNRE. bipartite graph in a more efficient and interpretable way. • We show that the proposed joint model on ACE05 achieves best entity performance, and is competitive with the state-of-the-art in relation performance. 2 Background of GCN In this section, we briefly describe graph convolutional networks (GCNs). 
Given a graph with n nodes, the goal of GCNs is to learn structureaware node representations on the graph which takes as inputs: • an n×d input node embedding matrix H, where n is the number of nodes and d is the dimension of input node embedding; • an n × n matrix representation of the graph structure such as the adjacency matrix A (or some function thereof) 2. In an L-layer GCNs, every layer can be written as a non-linear function H(l+1) = σ( ˆAH(l)W(l)) (1) with H(0) = H, where ˆA = D−1 2 AD−1 2 is the normalized symmetric adjacency matrix and W(l) is a parameter matrix for the l-th GCN layer. D is the diagonal node degree matrix, where Dii = P j Aij. σ is a non-linear activation function like ReLU. Finally, we can obtain a node-level output Z = H(L), which is an n × d feature matrix. 3 Approach We define the joint entity relation extraction task. Given a sentence s = w1, . . . w|s| (wi is a word), the task is to extract a set of entity spans E with specific types and a set of relations R. An entity span e ∈E is a sequence of words labeling with an entity type y (e.g., person (PER), organization (ORG)). A relation r is a quintet (e1, y1, e2, y2, l), where e1 and e2 are two entity spans with specific types y1 and y2. l is a relation type describing the semantic relation between two entities. (e.g., organization affiliation relation (ORG-AFF)). Let Te, Tr be the set of possible entity types and relation types respectively. 2In order to incorporate self-information, we add a selfloop to each node, where Aii = 1.0 for each node i. 1363 In this work, we decompose the joint entity relation extraction task into two parts, namely, entity span detection and entity relation type deduction. We first treat entity span detection as a sequence labelling task (Section 3.1), and then construct an entity-relation bipartite graph (Section 3.2) to perform joint type inference on entity nodes and relation nodes (Section 3.3). All submodels share parameters and are trained jointly. Different from existing joint learning algorithms (Sun et al., 2018; Zhang et al., 2017; Katiyar and Cardie, 2017; Miwa and Bansal, 2016), we propose a concise joint model to perform joint type inference on entities and relations based on GCNs. It considers interactions among multiple entity types and relation types simultaneously in a sentence. 3.1 Entity Span Detection To extract entity spans from a sentence (Figure 2), we adopt the BILOU sequence tagging scheme: B, I, L and O denote the begin, inside, last and outside of a target span, U denotes a single word span. For example, for a person (PER) entity “Patrick McDowell”, we assign B to “Patrick” and L to “McDowell”. Given an input sentence s, we use a bidirectional long short term memory (biLSTM) network (Hochreiter and Schmidhuber, 1997) with parameter θseq to incorporate information from both forward and backward directions of s. hi = biLSTM(xi; θseq), (2) where hi is the concatenation of a forward and a backward LSTM’s hidden states at position i, and xi is the word representation of wi which contains pre-trained embeddings and character-based word representations generated by running a CNN on the character sequences of wi. Then, we employ a softmax output layer to predict wi’s tag ˆti, P(ˆti|s) = Softmax(Wspanhi), where Wspan is the parameter. Given an input sentence s and its gold tag sequence t = t1, . . . , t|s|, the training objective is to minimize 3 Lspan = −1 |s| |s| X i=1 log P(ˆti = ti|s). 
(3) 3We have also tried biLSTM-CRF (Huang et al., 2015) as an advanced sequence labelling model, but performances are nearly the same in our experiments. biLSTM 𝒙𝟏𝒙𝟐𝒙𝟑𝒙𝟒𝒙𝟓 𝒙𝟔𝒙𝟕𝒙𝟖 Softmax B L O B L O U O 𝑒1 𝑒2 𝑒3 Figure 2: The biLSTM model for entity span detection. 3.2 Entity-Relation Bipartite Graph Given a set of detected entity spans ˆE (obtained from the entity span tag sequence ˆt), we consider all entity span pairs in ˆE as candidate relations 4. Then we build a heterogeneous undirected bipartite graph Gs which contains entity nodes and relation nodes in a sentence s. In the graph Gs, interactions on multiple entity types and relation types can be explicitly modeled. The number of nodes n in the graph Gs is the number of entity spans | ˆE| plus the number of all candidate relations | ˆE|(| ˆE|−1) 2 . We have an initial input node embedding matrix H. For a relation r12 and its two entities e1, e2, we use Hr12 to denote relation embedding of r12, and use He1,He2 to denote entity embedding of e1, e2 respectively. Next, we build edges between entity nodes and relation nodes. For graph edges, we connect every relation node to its two entity nodes instead of directly connecting any entity (relation) nodes. Thus we focus on the bipartite graph. The reasons are two folds. a) We do not think that all the remaining entities in the sentence are helpful. Relation nodes are bridges between entity nodes and vice versa. b) GCN is not suitable for fully-connected graphs because GCN reduce to rather trivial operations on fully-connected graphs. It means that, for an entity node e, the only way to observe other entities is through relations which e takes part in. For example, given a relation node r12 and its two entity nodes e1, e2, we add two edges. One is the edge between e1 and r12, and another is the edge 4The first entity span is always on the left side of the second entity span of each candidate relation, and we use in total 2Tr + 1 relation types in order to consider both directions. The additional type is the None which means no relation between entity span pair. 1364 𝑒1 𝑒2 𝑒3 𝑟12 𝑟13 𝑟23 GCN Softmax 1 1 1 Softmax Softmax . . . Activation Function & Dropout GCN Layer Binary Relation Entity Type Relation Type Node Embeding Extractor Entity Span Detection 𝑦1 𝑦2 𝑦3 𝑙1 𝑙2 𝑙3 Figure 3: Our network structure for the joint entity and relation extraction based on GCN. The node embedding extractor computes He and Hr. between e2 and r12. We refer to it as static graph. In order to further utilize the structure of the graph (some kind of prior knowledge) instead of using a static graph, we also investigate the dynamic graph for pruning redundant edges. A key intuition is that if two entities hold a relation, we could add two edges between the relation node and two entity nodes. Conversely, if two entities have no relation, we keep two entity nodes and the relation node separately. To this end, we introduce the binary relation classification task. It aims to predict whether a certain relation exists between an entity span pair (ignoring specific relation types). We build a binary relation model which predicts a label in {0, 1} to indicate the existence of a candidate relation based on relation node embedding. Given a relation node rij in a sentence s, to get the posterior of the binary relation label ˆb, we apply softmax layer on the relation node embedding Hrij, P(ˆb|rij, s) = Softmax(WbinHrij), where Wbin is the parameter. 
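Before turning to the training objectives, the graph construction of this subsection can be made concrete. The following NumPy sketch (not the authors' released implementation) builds the static entity-relation bipartite graph for a set of detected spans and applies the normalized propagation rule of Section 2; the function names and toy dimensions are ours, and the node embedding matrix H is assumed to be supplied by the extractor of Figure 4.

```python
import numpy as np

def build_bipartite_adjacency(num_entities):
    """Static entity-relation bipartite graph (Section 3.2, schematic).

    Nodes 0 .. num_entities-1 are entity nodes; one relation node is appended
    for every unordered pair of entity spans. A relation node is connected
    only to its two entity nodes, and every node carries a self-loop.
    """
    pairs = [(i, j) for i in range(num_entities) for j in range(i + 1, num_entities)]
    n = num_entities + len(pairs)
    A = np.eye(n)                              # self-loops, A_ii = 1
    for r, (i, j) in enumerate(pairs):
        r_node = num_entities + r              # index of the relation node for (e_i, e_j)
        A[r_node, i] = A[i, r_node] = 1.0
        A[r_node, j] = A[j, r_node] = 1.0
    return A, pairs

def gcn_layer(H, A, W):
    """One propagation step, H' = ReLU(D^{-1/2} A D^{-1/2} H W)."""
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    A_hat = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_hat @ H @ W, 0.0)

# Toy usage: 3 detected spans -> 3 entity nodes and 3 relation nodes.
A, pairs = build_bipartite_adjacency(3)
H = np.random.randn(A.shape[0], 128)           # node embeddings from the extractor (assumed given)
Z = gcn_layer(H, A, np.random.randn(128, 128)) # structure-aware node representations
```

Because every relation node touches exactly its two entity nodes, a single propagation step already lets each entity node observe the relations it takes part in, matching the bipartite design choice stated above.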
The training objective is to minimize Lbin = − X rij log P(ˆb = b|rij, s) # candidate relations rij , (4) where true binary annotations b are transformed from the original typed relation labels. Formally, the adjacency matrix A is defined as • if P(ˆb = 1|rij, s) > 0.5, we set the value of A between entity nodes ei, ej and relation node rij to 1.0, • the diagonal elements of A are set to 1.0, • while others are set to 0.0. To compare with hard binary value A, we also try the soft value A in experiments. It means that we set the value of A between entity nodes ei, ej and relation node rij to the probability P(ˆb = 1|rij, s) except for the diagonal elements (they are set to 1.0). Here, we introduce how to compute two types of contextualized node embedding in the graph Gs: entity node embedding and relation node embedding. Entity Node Embedding Given an entity span e ∈ˆE, for each word wi ∈e, we first collect wi’s biLSTM hidden vector hi from entity span model. Then, we use a CNN (a single convolution layer with a max-pooling layer) with a multi-layer perceptron (MLP) on vectors {hi|wi ∈e} to obtain the resulting d-dimensional entity span node embedding He (H is a matrix mentioned before in Section 2), as shown in the left part of Figure 4. Relation Node Embedding Given a candidate relation r12, we extract two types of features, namely, features regarding words in e1, e2 and features regarding contexts of the entity span pair (e1, e2). For features on words in e1, e2, we simply use entity node embedding He1 and He2. For context features of the entity span pair (e1, e2), we build three feature vectors by looking at words between e1 and e2, words on the left of the pair and words on the right of the pair. Similarly, we build three features by running another CNN with 1365 𝒉𝟐𝒉𝟑 CNN + MLP CNN + MLP CNN + MLP CNN + MLP CNN + MLP CNN + MLP ⊕ MLP 𝒉𝟏 𝒉𝟐𝒉𝟑 𝒉𝟒 𝒉𝟓𝒉𝟔 𝒉𝟕𝒉𝟖 𝑒1 𝑒2 𝑒1 Entity Embedding Relation Embedding Figure 4: Our node embedding extractor. an MLP. Finally, the five feature vectors are concatenated to a single vector. To get d-dimensional relation node embedding Hr12, we apply an MLP on the single vector, as shown in the right part of Figure 4. 3.3 Joint Type Inference After building the entity-relation bipartite graph, we feed the graph into a multi-layer GCNs to obtain the node-level output Z. For each row in Z (entity or relation node representation), it can gather and summarize information from other nodes in the graph Gs although there is no direct entity-entity or relation-relation edges in the graph. Then the final node representation F of graph Gs is concatenated by the input node embedding H and the node-level output Z (H, Z and F are matrices). Given an entity node ei and a relation node rij, to predict the corresponding node types, we pass the resulted node representation into two fully connected layer with a softmax function, respectively, P(ˆy|ei, s) = Softmax(WentFei), P(ˆl|rij, s) = Softmax(WrelFrij), where Went, Wrel are parameters. And the training objective is to minimize Lent = −1 | ˆE| X ei∈ˆE log P(ˆy = y|ei, s), (5) Lrel = − X rij log P(ˆl = l|rij, s) # candidate relations rij , (6) where the true label y, l can be read from annotations, as shown in Figure 3. 3.4 Training To train the joint model, we optimize the combined objective function L = Lspan + Lbin + Lent +Lrel, where the training is accomplished by the shared parameters. We employ the scheduled sampling strategy (Bengio et al., 2015) in the entity model similar to (Miwa and Bansal, 2016). 
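Putting the dynamic graph of Section 3.2 and the combined objective just described together, one possible reading is sketched below; it reuses the pair enumeration of the earlier sketch, the hard/soft switch and the equal weighting of the four losses follow the text, and the function name is ours rather than the authors'.

```python
import numpy as np

def dynamic_adjacency(p_rel, num_entities, pairs, hard=True):
    """Dynamic graph driven by the binary relation classifier (schematic).

    p_rel[r] is P(b = 1 | r_ij, s) for the r-th candidate pair in `pairs`
    (same enumeration as in the previous sketch). Hard variant: keep the two
    edges between a relation node and its entity nodes (weight 1.0) only when
    the probability exceeds 0.5. Soft variant: use the probability itself as
    the edge weight. Diagonal entries stay 1.0 in both cases.
    """
    n = num_entities + len(pairs)
    A = np.eye(n)
    for r, (i, j) in enumerate(pairs):
        w = float(p_rel[r] > 0.5) if hard else float(p_rel[r])
        r_node = num_entities + r
        A[r_node, i] = A[i, r_node] = w
        A[r_node, j] = A[j, r_node] = w
    return A

# During training the four cross-entropy terms are summed with equal weights,
# i.e. loss = L_span + L_bin + L_ent + L_rel, so gradients flow through the
# shared biLSTM / CNN parameters of all sub-models.
```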
We optimize our model using Adadelta (Zeiler, 2012) with gradient clipping. The network is regularized with dropout. Within a fixed number of epochs, we select the model according to the best relation performance on development sets5. 4 Experiments We conduct experiments on ACE05 dataset, which is a standard corpus for the entity relation extraction task. It includes 7 entity types and 6 relation types between entities. We use the same data split of ACE05 documents as previous work (351 training, 80 development and 80 testing) (Miwa and Bansal, 2016). We evaluate the performances using precision (P), recall (R) and F1 scores following (Miwa and Bansal, 2016; Sun et al., 2018). Specifically, an output entity (e, y) is correct if its type y and the region of its head e are correct, and an output relation r is correct if its (e1, y1, e2, y2, l) are correct ( i.e., exact match). In this paper, the default setting “GCN” is the 1-layer GCN-based joint model with the dynamic hard adjacency matrix, which achieves the best relation performance on ACE05 dataset. 4.1 End-to-End Results on ACE05 First, we compare proposed models with previous work in Table 1. In general, our “GCN” achieves the best entity performance 84.2 percent comparing with existing joint models. For relation performance, our “GCN” significantly outperforms all joint models except for (Sun et al., 2018) which uses more complex joint decoder. Comparing with our basic neural network “NN”, our “GCN” has large improvement both on entities and relations. Those observations demonstrate the effectiveness of our “GCN” for capturing information on multiple entity types and relation types from a sentence. 5 Our word embeddings is initialized with 100dimensional glove (Pennington et al., 2014) word embeddings. The dimensionality of the hidden units and node embedding are set to 128. For all CNN in our network, the kernel sizes are 2 and 3, and the output channels are 25. 1366 Model Entity Relation P R F P R F L&J (2014) 85.2 76.9 80.8 65.4 39.8 49.5 Zhang (2017) 83.5 57.5 Sun (2018) 83.9 83.2 83.6 64.9 55.1 59.6 M&B (2016) 82.9 83.9 83.4 57.2 54.0 55.6 K&C (2017) 84.0 81.3 82.6 55.5 51.8 53.6 NN 85.7 82.1 83.9 65.6 50.7 57.2 GCN 86.1 82.4 84.2 68.1 52.3 59.1 Table 1: Results on the ACE05 test data. Li and Ji (2014) Zhang et al. (2017) and Sun et al. (2018) are joint decoding algorithms. Miwa and Bansal (2016) and Katiyar and Cardie (2017) are joint training systems without joint decoding. “NN” is our neural network model without GCN. “GCN” is dynamic hard GCN-based neural network. We omit pipeline methods which underperform joint models (see (Li and Ji, 2014) for details). Compared to the state-of-the-art method which adopts minimum risk training (Sun et al., 2018), our “GCN” has better entity performance and comparable relation performance. Different from existing joint decoding systems, we do not use complex joint decoding algorithms such as beam search (Li and Ji, 2014), global normalization (Zhang et al., 2017) and minimum risk training (Sun et al., 2018). Our models only rely on sharing parameters similar to (Miwa and Bansal, 2016; Katiyar and Cardie, 2017). It is worth noting that the precision of our “GCN” is high compared to all the other methods. We attribute the phenomenon to the strong ability to model feature representations of entity nodes and relation nodes. Next, we evaluate our model with different settings. 
As mentioned in Section 3.2, we have three types of graph: “GCN (static)”, “GCN (dynamic + hard)” and “GCN (dynamic + soft)”. The last three rows of Table 3 show their performances. We have three observations regarding the Table 3. 1. Compared with “Sun (NN)” model which is the base neural network without minimum risk training (Sun et al., 2018), our “NN” performs better 0.5 point on entities. One reason might be the entity type model and the relation type model share more parameters (entity CNN+MLP parameters), while “Sun (NN)” only shares biLSTM hidden states. However, our “NN” performs within 0.6 point on relations. One possible reason might be that we do not use the features of output entity type for relation type classification. 2. After introducing graph convolutional networks, all three GCN-based models improve per1-layer 2-layer 3-layer F1 of Entity Span 90.4 90.5 90.7 F1 of Binary Relation 61.5 62.9 62.8 F1 of Entity 81.6 82.1 82.2 F1 of Relation 53.8 53.5 53.6 Table 2: Results on the ACE05 development set with respect to the number of GCN layers. formances of entity and relation. Specifically, The “GCN (static)” has been slightly improved on relations. The “GCN (dynamic + soft)” achieves 0.7 percent improvement on relations and has the same entity performance. The “GCN (dynamic + hard)” improves the entity performance (0.4 percent) 6 and achieves large improvement (1.9 percent) in relation performance. It is competitive with state-of-the-art model (Sun et al., 2018). These observations show that the proposed joint model is effective for the joint type inference on entities and relations, and also show the rationality of the proposed dynamic graph, as expected. 3. The performances of the entity span and the binary relation are close to all proposed models. One possible reason is that there are more coarsegained task. Effective features can be easily extracted for all models. It is worth noting that the performance in binary relation is not very good. Our dynamic graph relies on binary relation detection task. How to improve the performance of binary relation is still a hard question. We leave it as future work. Thirdly, we present the influences of the number of GCN layers (Table 2). We take the “GCN (dynamic + hard)” as a example. In general, the performances on four tasks are insensitive to the number of GCN layers 7. In particular, the performances on entity span, entity and relation fluctuate at 1.0 points, and the binary relation fluctuate at 1.4 points. Interestingly, we find the one layer GCN achieves best relation performance though the performances of other three tasks are not best. One possible reason is that the all models are closely related to each other. However, how they 6 In fact, the entity performance on the ACE05 test data is hard to improve from past works (Miwa and Bansal, 2016; Zhang et al., 2017; Sun et al., 2018). So it is a non-negligible improvement over existing state-of-the-art systems. 7 We focus on the performance of the end-to-end relation extraction, so we select models by the relation extraction results. It is also possible to consider both the performances of the entity model and the relation model. We leave the study of advanced model selection algorithms for future work. 
1367 Model Entity Relation Entity Span Binary Relation P R F P R F P R F P R F Sun (NN) (2018) 84.0 82.9 83.4 59.5 56.3 57.8 NN 85.7 82.1 83.9 65.6 50.7 57.2 91.2 89.6 90.4 GCN (static) 85.0 82.6 83.8 66.6 51.3 57.8 90.8 90.2 90.5 GCN (dynamic + soft) 85.3 82.3 83.8 67.3 51.6 58.5 90.8 90.2 90.5 77.3 56.4 65.2 GCN (dynamic + hard) 86.1 82.4 84.2 68.1 52.3 59.1 91.2 89.5 90.4 78.2 56.3 65.4 Table 3: Results on the ACE05 dataset in different settings. 1 (288) 2 (163) 3 (72) 4 (33) 5 (15) 6) the number of relations for each sentence 50 55 60 65 70 F1 score NN GCN(static) GCN(dynamic+soft) GCN(dynamic+hard) Figure 5: F1 scores with respect to the number of relations for each sentence. The numbers in parentheses are counts of sentences in the ACE05 test set. affect each other in this joint settings is still an open question. Forthly, we examine the relation performance with respect to different the number of relations for each sentence (Figure 5). In general, our GCNbased models almost outperform “NN” when the number of relations is larger than 2. It proves that the proposed GCN-based models are more suitable for handle multiple relations in a sentence. We think our method will perform better on the complex multiple relations dataset which is very common in reality. Finally, We compare the “NN” model with the “GCN” model on some concrete examples, as shown in Table 5. For S1, the “NN” wrongly identifies the relation GEN-AFF between “[legislature]ORG” and “[north korea]GPE” even though the relation ORG-AFF between “[legislature]ORG” and “[chairman]PER” is detected. For S2, the “NN” does not detect PART-WHOLE relation while the “GCN” correctly find it. These two observations show that our “GCN” is good at dealing with the situation when the multiple relations share common entities, as expected. For S3, our “GCN” identifies a PHYS relation between “[units]PER” and “[captial]GPE”, Model Relation M&B (2016) 70.1 61.2 65.3 C&M (2018) 69.7 59.5 64.2 NN 68.5 62.8 65.5 GCN (static) 69.1 63.8 66.4 GCN (dynamic + soft) 68.7 63.4 65.9 GCN (dynamic + hard) 68.7 65.4 67.0 Table 4: Results on the ACE05 dataset with golden entity. while the “NN” does not find this relation even the entities are correct. However, both models do not identify the relation ART between “[units]PER” and “[weapons]WEA”. We think advanced improvement methods which use more powerful graph neural network might be helpful in this situation. 4.2 Golden Entity Results on ACE05 In order to compare with relation classification methods, we evaluate our models with golden entities on ACE05 corpus in Table 4. We use the same data split to compare with their model (Miwa and Bansal, 2016; Christopoulou et al., 2018). We do not tune hyperparameters extensively. For example, we use the same setting in both end-to-end and golden entity rather than tune parameters on each of them. The baseline systems are (Miwa and Bansal, 2016) and (Christopoulou et al., 2018). In general, our “NN” is competitive, comparing to the dependency tree-based state-of-the-art model (Miwa and Bansal, 2016). It shows that our CNN-based neural networks are able to extract more powerful features to help relation extraction task. After adding GCN, our GCN-based models achieve the better performance. This indicates that the proposed models can achieve large improvement without any external syntactic tools 8. 8For simplicity, we do not extract golden entity type features explicitly in our model. And we believe there will be further improvements when these features are used. 
1368 S1 the [british]GPE:♥♣♠ GEN-AFF-2:♥♣♠ [arm]ORG:♥♣♠ PART-WHOLE-1:♥♠|GEN-AFF-1:♥♣♠ of french distributors [pathe]ORG:♥♣♠ PART-WHOLE-2:♥♠to show four releases . S2 . . . [chairman]PER:♥♣♠ ORG-AFF-1:♥♣♠ of [north korea ]GPE:♥♣♠ PART-WHOLE-2:♥♠|GEN-AFF-2:♣ ’s [legislature]ORG:♥♣♠ PART-WHOLE-1:♥♠|ORG-AFF-2:♥♣♠|GEN-AFF-1:♣, the supreme people ’s assembly . S3 a red line may have been drawn around the [capital]GPE:♥♣♠ PHYS-2:♥♠ with [republican gurad]ORG:♥♣♠ ORG-AFF-2:♥♣♠ [units]PER:♥♣♠ PHYS-1:♥♠|ORG-AFF-1:♥♣♠|ART-1:♥ ordered to use chemical [weapons]WEA:♥♣♠ ART-2:♥once u.s. and allied troops cross it . Table 5: Examples from the ACE05 dataset with label annotations from “NN” model and “GCN” model for comparison. The ♥is the gold standard, and the ♣, ♠are the output of the “NN” ,“GCN” model respectively. 5 Related Work There have been extensive studies for entity relation extraction task. Early work employs a pipeline of methods that extracts entities first, and then determines their relations (Zelenko et al., 2003; Miwa et al., 2009; Chan and Roth, 2011; Lin et al., 2016). As pipeline approaches suffer from error propagation, researchers have proposed methods for joint entity relation extraction. Parameter sharing is a basic strategy for joint extraction. For example, Miwa and Bansal (2016) propose a neural method comprised of a sentencelevel RNN for extracting entities, and a dependency tree-based RNN to predict relations. Their relation model takes hidden states of the entity model as features (i.e., the shared parameters). Similarly, Katiyar and Cardie (2017) use a simplified relation model based on the entity RNN using the attention mechanism. These joint methods do joint learning through sharing parameters and they have no explicit interaction in type inference. To further explore interactions between the entity decoder and the relation decoder, many of them focus on some joint decoding algorithms. ILP-based joint decoder (Yang and Cardie, 2013), CRF-based joint decoder (Katiyar and Cardie, 2016), joint sequence labelling tag set (Zheng et al., 2017), beam search (Li and Ji, 2014), global normalization (Zhang et al., 2017), and transition system (Wang et al., 2018) are investigated. Different from models there, we propose a novel and concise joint model to handle joint type inference based on graph convolutional networks, which can capture information between multiple entity types and relation types explicitly9. 9In addition, transfer learning(Sun and Wu, 2019), multiRecently, researches of graph neural networks (GNNs) have been receiving more and more attention because of the great expressive power of graphs (Cai et al., 2018; Battaglia et al., 2018; Zhou et al., 2018). Graph Convolutional Network (GCN) is one of the typical variants of GNN (Bruna et al., 2013; Defferrard et al., 2016; Kipf and Welling, 2017). It has been successfully applied to many NLP tasks such as text classification (Yao et al., 2018), semantic role labeling (Marcheggiani and Titov, 2017), relation extraction (Zhang et al., 2018) machine translation (Bastings et al., 2017) and knowledge base completion (Shang et al., 2018). We note that most previous applications of GCN focus on a single job, while the joint entity relation extraction consists of multiple sub-tasks. Investigating GCN in joint learning scenarios is the main topic of this work. A closely related work is (Christopoulou et al., 2018), which focuses on relation extraction with golden entities. 
Our work can be viewed as an end-to-end extension of their work. 6 Conclusion We propose a novel and concise joint model based on GCN to perform joint type inference for entity relation extraction task. Compared with existing joint methods, it provides a new way to capture the interactions on multiple entity types and relation types explicitly in a sentence. Experiments on ACE05 dataset show the effectiveness of the proposed method. task learning (Sanh et al., 2018) for this task were also studied. In order to make a fair comparison, we do not include these models in experiments. 1369 Acknowledgement The authors wish to thank the reviewers for their helpful comments and suggestions. This research is (partially) supported by STCSM (18ZR1411500), NSFC(61673179) and the Foundation of State Key Laboratory of Cognitive Intelligence, iFLYTEK(COGOS-20190003). The corresponding authors are Yuanbin Wu and Shiliang Sun. References Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1957–1967. Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. 2018. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pages 1171–1179. Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. 2013. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203. Hongyun Cai, Vincent W Zheng, and Kevin ChenChuan Chang. 2018. A comprehensive survey of graph embedding: Problems, techniques, and applications. IEEE Transactions on Knowledge and Data Engineering, 30(9):1616–1637. Yee Seng Chan and Dan Roth. 2011. Exploiting syntactico-semantic structures for relation extraction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 551–560, Portland, Oregon, USA. Association for Computational Linguistics. Fenia Christopoulou, Makoto Miwa, and Sophia Ananiadou. 2018. A walk-based model on entity graphs for relation extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 81–88. Association for Computational Linguistics. Micha¨el Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, pages 3844–3852. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991. Arzoo Katiyar and Claire Cardie. 2016. Investigating lstms for joint extraction of opinion entities and relations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 919–929, Berlin, Germany. Association for Computational Linguistics. Arzoo Katiyar and Claire Cardie. 2017. 
Going out on a limb: Joint extraction of entity mentions and relations without dependency trees. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 917–928, Vancouver, Canada. Association for Computational Linguistics. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR). Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proc. of ACL, pages 402–412. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2124–2133, Berlin, Germany. Association for Computational Linguistics. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506–1515. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105–1116, Berlin, Germany. Association for Computational Linguistics. Makoto Miwa, Rune Sætre, Yusuke Miyao, and Jun’ichi Tsujii. 2009. A rich feature vector for protein-protein interaction extraction from multiple corpora. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 121–130, Singapore. Association for Computational Linguistics. 1370 Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proc. of EMNLP, pages 1532– 1543. Victor Sanh, Thomas Wolf, and Sebastian Ruder. 2018. A hierarchical multi-task approach for learning embeddings from semantic tasks. arXiv preprint arXiv:1811.06031. Chao Shang, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, and Bowen Zhou. 2018. Endto-end structure-aware convolutional networks for knowledge base completion. arXiv preprint arXiv:1811.04441. Changzhi Sun and Yuanbin Wu. 2019. Distantly supervised entity relation extraction with adapted manual annotations. In Thirty-Third AAAI Conference on Artificial Intelligence. Changzhi Sun, Yuanbin Wu, Man Lan, Shiliang Sun, Wenting Wang, Kuang-Chih Lee, and Kewen Wu. 2018. Extracting entities and relations with joint minimum risk training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2256–2265. Association for Computational Linguistics. Shaolei Wang, Yue Zhang, Wanxiang Che, and Ting Liu. 2018. Joint extraction of entities and relations based on a novel graph scheme. In IJCAI, pages 4461–4467. Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1640–1649. Liang Yao, Chengsheng Mao, and Yuan Luo. 2018. Graph convolutional networks for text classification. arXiv preprint arXiv:1809.05679. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. Journal of machine learning research, 3(Feb):1083–1106. 
Meishan Zhang, Yue Zhang, and Guohong Fu. 2017. End-to-end neural relation extraction with global optimization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1731–1741, Copenhagen, Denmark. Association for Computational Linguistics. Yuhao Zhang, Peng Qi, and Christopher D Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205–2215. Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extraction of entities and relations based on a novel tagging scheme. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1227–1236, Vancouver, Canada. Association for Computational Linguistics. Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, and Maosong Sun. 2018. Graph neural networks: A review of methods and applications. arXiv preprint arXiv:1812.08434.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1371–1377 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1371 Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers Haoyu Wang∗† Ming Tan∗† Mo Yu∗‡ Shiyu Chang‡ Dakuo Wang‡ Kun Xu§ Xiaoxiao Guo‡ Saloni Potdar† †IBM Watson ‡IBM Research §Tencent AI Lab Abstract The state-of-the-art solutions for extracting multiple entity-relations from an input paragraph always require a multiple-pass encoding on the input. This paper proposes a new solution that can complete the multiple entityrelations extraction task with only one-pass encoding on the input corpus, and achieve a new state-of-the-art accuracy performance, as demonstrated in the ACE 2005 benchmark. Our solution is built on top of the pre-trained self-attentive models (Transformer). Since our method uses a single-pass to compute all relations at once, it scales to larger datasets easily; which makes it more usable in real-world applications. 1 1 Introduction Relation extraction (RE) aims to find the semantic relation between a pair of entity mentions from an input paragraph. A solution to this task is essential for many downstream NLP applications such as automatic knowledge-base completion (Surdeanu et al., 2012; Riedel et al., 2013; Verga et al., 2016), knowledge base question answering (Yih et al., 2015; Xu et al., 2016; Yu et al., 2017), and symbolic approaches for visual question answering (Mao et al., 2019; Hu et al., 2019), etc. One particular type of the RE task is multiplerelations extraction (MRE) that aims to recognize relations of multiple pairs of entity mentions from an input paragraph. Because in real-world applications, whose input paragraphs dominantly contain multiple pairs of entities, an efficient and effective solution for MRE has more important and more practical implications. However, nearly all existing approaches for MRE tasks (Qu et al., ∗Equal contributions from the corresponding authors: {wanghaoy,mingtan,yum}@us.ibm.com. Part of work was done when Kun was at IBM. 1https://github.com/helloeve/mre-in-one-pass. ×12 … … Entity-aware Self-attention+ Feed-forward … in south suburbs of Bagh #dad and Iraqi artillery fired … Linear PART-WHOLE(e1,e2) ART(e1,e2) Pool Pool Linear Figure 1: Model Architecture. Different pairs of entities, e.g., (Iraqi and artillery), (southern suburbs, Baghdad) are predicted simultaneously. 2014; Gormley et al., 2015; Nguyen and Grishman, 2015) adopt some variations of the singlerelation extraction (SRE) approach, which treats each pair of entity mentions as an independent instance, and requires multiple passes of encoding for the multiple pairs of entities. The drawback of this approach is obvious – it is computationally expensive and this issue becomes more severe when the input paragraph is large, making this solution impossible to implement when the encoding step involves deep models. This work presents a solution that can resolve the inefficient multiple-passes issue of existing solutions for MRE by encoding the input only once, which significantly increases the efficiency and scalability. Specifically, the proposed solution is built on top of the existing transformer-based, pretrained general-purposed language encoders. In this paper we use Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) as the transformer-based encoder, but this solution is not limited to using BERT alone. 
The two novel modifications to the original BERT architecture are: (1) we introduce a structured prediction layer for predicting multiple relations for different entity pairs; and (2) we make the selfattention layers aware of the positions of all en1372 tities in the input paragraph. To the best of our knowledge, this work is the first promising solution that can solve MRE tasks with such high efficiency (encoding the input in one-pass) and effectiveness (achieve a new state-of-the-art performance), as proved on the ACE 2005 benchmark. 2 Background MRE is an important task as it is an essential prior step for many downstream tasks such as automatic knowledge-base completion and questionanswering. Popular MRE benchmarks include ACE (Walker et al., 2006) and ERE (Linguistic Data Consortium, 2013). In MRE, given as a text paragraph x = {x1, . . . , xN} and M mentions e = {e1, . . . , eM} as input, the goal is to predict the relation rij for each mention pair (ei, ej) either belongs to one class of a list of pre-defined relations R or falls into a special class NA indicating no relation. This paper uses “entity mention”, “mention” and “entity” interchangeably. Existing MRE approaches are based on either feature and model architecture selection techniques (Xu et al., 2015; Gormley et al., 2015; Nguyen and Grishman, 2015; F. Petroni and Gemulla, 2015; Sorokin and Gurevych, 2017; Song et al., 2018b), or domain adaptations approaches (Fu et al., 2017; Shi et al., 2018). But these approaches require multiple passes of encoding over the paragraph, as they treat a MRE task as multiple passes of a SRE task. 3 Proposed Approach This section describes the proposed one-pass encoding MRE solution. The solution is built upon BERT with a structured prediction layer to enable BERT to predict multiple relations with onepass encoding, and an entity-aware self-attention mechanism to infuse the relational information with regard to multiple entities at each layer of hidden states. The framework is illustrated in Figure 1. It is worth mentioning that our solution can easily use other transformer-based encoders besides BERT, e.g. (Radford et al., 2018). 3.1 Structured Prediction with BERT for MRE The BERT model has been successfully applied to various NLP tasks. However, the final prediction layers used in the original model is not applicable to MRE tasks. The MRE task essentially requires to perform edge predictions over a graph with entities as nodes. Inspired by (Dozat and Manning, 2018; Ahmad et al., 2018), we propose that we can first encode the input paragraph using BERT. Thus, the representation for a pair of entity mentions (ei, ej) can be denoted as oi and oj respectively. In the case of a mention ei consist of multiple hidden states (due to the byte pair encoding), oi is aggregated via average-pooling over the hidden states of the corresponding tokens in the last BERT layer. We then concatenate oi and oj denoted as [oi : oj], and pass it to a linear classifier2 to predict the relation P(rij|x, ei, ej) = softmax(WL[oi : oj] + b), (1) where WL ∈R2dz×l. dz is the dimension of BERT embedding at each token position, and l is the number of relation labels. 3.2 Entity-Aware Self-Attention based on Relative Distance This section describes how we encode multiplerelations information into the model. The key concept is to use the relative distances between words and entities to encode the positional information for each entity. This information is propagated through different layers via attention computations. 
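Before spelling out the attention modification, the structured prediction layer of Eq. (1) can be illustrated with the short PyTorch sketch below. It is a schematic reading of the description above, not the released code; the class name and span format are ours, and the BERT encoder itself is abstracted away as a pre-computed hidden-state matrix.

```python
import torch
import torch.nn as nn

class RelationHead(nn.Module):
    """Structured prediction layer of Eq. (1): a single linear classifier over
    the concatenated mention representations [o_i : o_j]. A sketch, not the
    released implementation; `hidden` is assumed to come from the encoder."""

    def __init__(self, d_model, num_labels):
        super().__init__()
        self.classifier = nn.Linear(2 * d_model, num_labels)

    def forward(self, hidden, mention_spans):
        # hidden: [seq_len, d_model] last-layer states of one paragraph.
        # mention_spans: list of (start, end) token offsets (end exclusive)
        # for the M detected entity mentions; assumes M >= 2.
        mentions = [hidden[s:e].mean(dim=0) for s, e in mention_spans]  # average pooling
        feats, index = [], []
        for i in range(len(mentions)):
            for j in range(len(mentions)):
                if i != j:                                              # one score per ordered pair
                    feats.append(torch.cat([mentions[i], mentions[j]], dim=-1))
                    index.append((i, j))
        logits = self.classifier(torch.stack(feats))                    # [num_pairs, num_labels]
        return logits, index
```

Whether ordered or unordered pairs are enumerated, and whether an NA class absorbs unrelated pairs, depends on how the label set encodes direction; the point of the sketch is that every pair in the paragraph is scored from a single encoding pass.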
Following (Shaw et al., 2018), for each pair of word tokens (xi, xj) with the input representations from the previous layer as hi and hj, we extend the computation of self-attention zi as: zi = N X j=1 exp eij PN k=1 exp eik (hjWV + aV ij), (2) where eij = hiWQ(hjWK + aK ij)/ √ dz. (3) WQ, WK, WV ∈Rdz×dz are the parameters of the model, and dz is the dimension of the output from the self-attention layer. Compared to standard BERT’s self-attention, aV ij, aK ij ∈Rdz are extra, which could be viewed as the edge representation between the input element xi and xj . Specifically, we devise aV ij and aK ij to encourage each token to be aware of the relative distance to different entity mentions, and vice versa. 2We also tried to use MLP and Biaff instead of the linear layer for the classification, which do not show better performance compared to the linear classier, as shown in the experiment section. We hypothesize that this is because the embeddings learned from BERT are powerful enough for linear classifiers. Further experiments is needed to verify this. 1373 in of suburbs Bagh and fired artillery Iraqi ##dad in suburbs of Bagh ##dad and Iraqi artillery fired Zero Vector !"($%&) !"(&%$) south south Figure 2: Illustration of the tensor {aK ij} introduced in selfattention computation. Each red cell embedding is defined by wd(i−j), as the distance from entity xi to token xj. Each blue cell embedding is defined by wd(j−i), as the distance from the entity xj to token xi . White cells are zero embeddings since neither xi nor xj is entity. The {aV ij} follows the same pattern with independent parameters. Adapted from (Shaw et al., 2018), we argue that the relative distance information will not help if the distance is beyond a certain threshold. Hence we first define the distance function as: d(i, j) = min(max(−k, (i −j)), k). (4) This distance definition clips all distances to a region [−k, k]. k is a hyper-parameter to be tuned on the development set. We can now define aV ij and aK ij formally as: aV ij, aK ij =      wV d(i,j), wK d(i,j), if xi ∈e wV d(j,i), wK d(j,i), if xj ∈e 0, else. (5) As defined above, if either token xi or xj belongs to an entity, we will introduce a relative positional representation according to their distance. The distance is defined in an entity-centric way as we always compute the distance from the entity mention to the other token. If neither xi nor xj are entity mentions, we explicitly assign a zero vector to aK ij and aV ij. When both xi and xj are inside entity mentions, we take the distance as d(i, j) to make row-wise attention computation coherent as depicted in Figure 2. During the model fine-tuning, the newly introduced parameters {wK −k, ..., wK k } and {wV −k, ..., wV k } are trained from scratch. 4 Experiments We demonstrate the advantage of our method on a popular MRE benchmark, ACE 2005 (Walker et al., 2006), and a more recent MRE benchmark, SemEval 2018 Task 7 (G´abor et al., 2018). We also evaluate on a commonly used SRE benchmark SemEval 2010 task 8 (Hendrickx et al., 2009), and achieve state-of-the-art performance. 4.1 Settings Data For ACE 2005, we adopt the multi-domain setting and split the data following (Gormley et al., 2015): we train on the union of news domain (nw and bn), tune hyperparameters on half of the broadcast conversation (bc) domain, and evaluate on the remainder of broadcast conversation (bc), the telephone speech (cts), usenet newsgroups (un), and weblogs (wl) domains. For SemEval 2018 Task 7, we evaluate on its sub-task 1.1. 
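To make the relative-distance machinery of Section 3.2 (Eqs. 4–5) concrete before turning to the experimental details, the following sketch builds the clipped distance and the entity-aware embedding lookup. It is illustrative PyTorch, not the released implementation; the boolean entity mask, the dense double loop, and the parameter names are simplifying assumptions.

```python
import torch
import torch.nn as nn

def clipped_distance(i: int, j: int, k: int) -> int:
    """d(i, j) = min(max(-k, i - j), k) from Eq. (4)."""
    return min(max(-k, i - j), k)

class EntityAwareRelativeEmbeddings(nn.Module):
    """Sketch of the {a_ij} tensors of Eq. (5): one learned vector per clipped
    distance, looked up entity-centrically; zero vectors everywhere else."""
    def __init__(self, k: int, dim: int):
        super().__init__()
        self.k = k
        self.table = nn.Embedding(2 * k + 1, dim)  # w_{-k}, ..., w_{k}

    def forward(self, seq_len: int, entity_mask):
        # entity_mask: bool tensor (seq_len,) marking tokens inside mentions
        a = torch.zeros(seq_len, seq_len, self.table.embedding_dim)
        for i in range(seq_len):
            for j in range(seq_len):
                if entity_mask[i]:
                    d = clipped_distance(i, j, self.k)   # entity-centric: from x_i
                elif entity_mask[j]:
                    d = clipped_distance(j, i, self.k)   # entity-centric: from x_j
                else:
                    continue  # zero vector when neither token is an entity
                a[i, j] = self.table.weight[d + self.k]
        return a  # added to the keys (a^K) or values (a^V) in self-attention
```

In practice the lookup would be vectorized and the two tables (for keys and values) kept separate, as in the paper; the sketch only fixes the indexing logic of Eq. (5).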
We use the same data split in the shared task. The passages in this task is usually much longer compared to ACE. Therefore we adopt the following pre-processing step – for the entity pair in each relation, we assume the tokens related to their relation labeling are always within a range from the fifth token ahead of the pair to the fifth token after it. Therefore, the tokens in the original passage that are not covered by the range of ANY input relations, will be removed from the input. Methods We compare our solution with previous works that predict a single relation per pass (Gormley et al., 2015; Nguyen and Grishman, 2015; Fu et al., 2017; Shi et al., 2018), our model that predicts single relation per pass for MRE, and with the following naive modifications of BERT that could achieve MRE in one-pass. • BERTSP: BERT with structured prediction only, which includes proposed improvement in 3.1. • Entity-Aware BERTSP: our full model, which includes both improvements in §3.1 and §3.2. • BERTSP with position embedding on the final attention layer. This is a more straightforward way to achieve MRE in one-pass derived from previous works using position embeddings (Nguyen and Grishman, 2015; Fu et al., 2017; Shi et al., 2018). In this method, the BERT model encode the paragraph to the last attention-layer. Then, for each entity pair, it takes the hidden states, adds the relative position embeddings corresponding to the target entities, and finally makes the relation prediction for this pair. • BERTSP with entity indicators on input layer: it replaces our structured attention layer, and adds indicators of entities (transformed to embeddings) 1374 Method dev bc cts wl avg Baselines w/o Domain Adaptation (Single-Relation per Pass) Hybrid FCM (Gormley et al., 2015) 63.48 56.12 55.17 58.26 Best published results w/o DA (from Fu et al.) 64.44 54.58 57.02 58.68 BERT fine-tuning out-of-box 3.66 5.56 5.53 1.67 4.25 Baselines w/ Domain Adaptation (Single-Relation per Pass) Domain Adversarial Network (Fu et al., 2017) 65.16 55.55 57.19 59.30 Genre Separation Network (Shi et al., 2018) 66.38 57.92 56.84 60.38 Multi-Relation per Pass BERTSP (our model in §3.1) 64.42 67.09 53.20 52.73 57.67 Entity-Aware BERTSP (our full model) 67.46 69.25 61.70 58.48 63.14 BERTSP w/ entity-indicator on input-layer 65.32 66.86 57.65 53.56 59.36 BERTSP w/ pos-emb on final att-layer 67.23 69.13 58.68 55.04 60.95 Single-Relation per Pass BERTSP (our model in §3.1) 65.13 66.95 55.43 54.39 58.92 Entity-Aware BERTSP (our full model) 68.90 68.52 63.71 57.20 63.14 BERTSP w/ entity-indicator on input-layer 67.12 69.76 58.05 56.27 61.36 Table 1: Main Results on ACE 2005. directly to each token’s word embedding3. This method is an extension of (Verga et al., 2018) to the MRE scenario. Hyperparameters For our experiments, most model hyperparameters are the same as in pretraining. We tune the training epochs and the new hyperparameter k (in Eq. 4) on the development set of ACE 2005. Since the SemEval task has no development set, we use the best hyperparameters selected on ACE. For the number of training epochs, we make the model pass similar number of training instances as in ACE 2005. 4.2 Results on ACE 2005 Main Results Table 1 gives the overall results on ACE 2005. The first observation is that our model architecture achieves much better results compared to the previous state-of-the-art methods. Note that our method was not designed for domain adaptation, it still outperforms those methods with domain adaptation. 
This result further demonstrates its effectiveness. Among all the BERT-based approaches, finetuning the off-the-shelf BERT does not give a satisfying result, because the sentence embeddings cannot distinguish different entity pairs. The simpler version of our approach, BERTSP, can successfully adapt the pre-trained BERT to the MRE task, and achieves comparable performance at the 3Note the usage of relative position embeddings does not work for one-pass MRE, since each word corresponds to a varying number of position embedding vectors. Summing up the vectors confuses this information. It works for the singlerelation per pass setting, but the performance lags behind using only indicators of the two target entities. prior state-of-the-art level of the methods without domain adaptation. Our full model, with the structured fine-tuning of attention layers, brings further improvement of about 5.5%, in the MRE one-pass setting, and achieves a new state-of-the-art performance when compared to the methods with domain adaptation. It also beats the other two methods on BERT in Multi-Relation per Pass. Performance Gap between MRE in One-Pass and Multi-Pass The MRE-in-one-pass models can also be used to train and test with one entity pair per pass (Single-Relation per Pass results in Table 1). Therefore, we compare the same methods when applied to the multi-relation and singlerelation settings. For BERTSP with entity indicators on inputs, it is expected to perform slightly better in the single-relation setting, because of the mixture of information from multiple pairs. A 2% gap is observed as expected. By comparison, our full model has a much smaller performance gap between two different settings (and no consistent performance drop over different domains). The BERTSP is not expected to have a gap as shown in the table.For BERTSP with position embeddings on the final attention layer, we train the model in the single-relation setting and test with two different settings, so the results are the same. Training and Inference Time Through our experiment,4 we verify that the full model with MRE is significantly faster compared to all other methods for both training and inference. The training 4All evaluations were done on a single Tesla K80 GPU. 1375 Method dev bc cts wl avg Linear 67.46 69.25 61.70 58.48 63.14 MLP 67.16 68.52 61.16 54.72 61.47 Biaff 67.06 68.22 60.39 55.60 61.40 Table 2: Our model with different prediction modules. time for full model with MRE is 3.5x faster than it with SRE. As for inference speed, the former could reach 126 relation per second compared the later at 23 relation per second. It is also much faster when compared to the second best performing approach, BERTSP w/ pos-emb on final attlayer, which is at 76 relation per second, as it runs the last layer for every entity pair. Prediction Module Selection Table 2 evaluates the usage of different prediction layers, including replacing our linear layer in Eq.(1) with MLP or Biaff. Results show that the usage of the linear predictor gives better results. This is consistent with the motivation of the pre-trained encoders: by unsupervised pre-training the encoders are expected to be sufficiently powerful thus adding more complex layers on top does not improve the capacity but leads to more free parameters and higher risk of over-fitting. 4.3 Results on SemEval 2018 Task 7 The results on SemEval 2018 Task 7 are shown in Table 3. 
Our Entity-Aware BERTSP gives comparable results to the top-ranked system (Rotsztejn et al., 2018) in the shared task, with slightly lower Macro-F1, which is the official metric of the task, and slightly higher Micro-F1. When predicting multiple relations in one-pass, we have 0.9% drop on Macro-F1, but a further 0.8% improvement on Micro-F1. Note that the system (Rotsztejn et al., 2018) integrates many techniques like feature-engineering, model combination, pretraining embeddings on in-domain data, and artificial data generation, while our model is almost a direct adaption from the ACE architecture. On the other hand, compared to the top singlemodel result (Luan et al., 2018), which makes use of additional word and entity embeddings pretrained on in-domain data, our methods demonstrate clear advantage as a single model. 4.4 Additional SRE Results We conduct additional experiments on the relation classification task, SemEval 2010 Task 8, to comMethod Averaged F1 Macro Micro Top 3 in the Shared Task (Rotsztejn et al., 2018) 81.7 82.8 (Luan et al., 2018) 78.9 (Nooralahzadeh et al., 2018) 76.7 Ours (single-per-pass) 81.4 83.1 Ours (multiple-per-pass) 80.5 83.9 Table 3: Results on SemEval 2018 Task 7, Sub-Task 1.1. Method Macro-F1 Best published result (Wang et al., 2016) 88.0 BERT out-of-box 80.9 Entity-Aware BERT 88.8 BERTSP 88.8 Entity-Aware BERTSP 89.0 Table 4: Additional Results on SemEval 2010 Task 8. pare with models developed on this benchmark. From the results in Table 4, our proposed techniques also outperforms the state-of-the-art on this single-relation benchmark. On this single relation task, the out-of-box BERT achieves a reasonable result after finetuning. Adding the entity-aware attention gives about 8% improvement, due to the availability of the entity information during encoding. Adding structured prediction layer to BERT (i.e., BERTSP) also leads to a similar amount of improvement. However, the gap between BERTSP method with and without entity-aware attention is small. This is likely because of the bias of data distribution: the assumption that only two target entities exist, makes the two techniques have similar effects. 5 Conclusion In summary, we propose a first-of-its-kind solution that can simultaneously extract multiple relations with one-pass encoding of an input paragraph for MRE tasks. With the proposed structured prediction and entity-aware self-attention layers on top of BERT, we achieve a new state-of-the-art results with high efficiency on the ACE 2005 benchmark. Our idea of encoding a passage regarding multiple entities has potentially broader applications beyond relation extraction, e.g., entity-centric passage encoding in question answering (Song et al., 2018a). In the future work, we will explore the usage of this method with other applications. 1376 References Wasi Uddin Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng. 2018. Near or far, wide range zero-shot crosslingual dependency parsing. arXiv preprint arXiv:1811.00570. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Timothy Dozat and Christopher D Manning. 2018. Simpler but more accurate semantic dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 484–490. L. Del Corro F. Petroni and R. Gemulla. 2015. 
Core: Context-aware open relation extraction with factorization machines. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Lisheng Fu, Thien Huu Nguyen, Bonan Min, and Ralph Grishman. 2017. Domain adaptation for relation extraction with domain adversarial neural network. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 425–429. Kata G´abor, Davide Buscaldi, Anne-Kathrin Schumann, Behrang QasemiZadeh, Haifa Zargayouna, and Thierry Charnois. 2018. Semeval-2018 task 7: Semantic relation extraction and classification in scientific papers. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 679–688. Matthew R Gormley, Mo Yu, and Mark Dredze. 2015. Improved relation extraction with feature-rich compositional embedding models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1774–1784. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions, pages 94–99. Association for Computational Linguistics. Ronghang Hu, Anna Rohrbach, Trevor Darrell, and Kate Saenko. 2019. Language-conditioned graph networks for relational reasoning. arXiv preprint arXiv:1905.04405. Linguistic Data Consortium. 2013. Deft ere annotation guidelines: Relations v1.1. 05.17.2013. Yi Luan, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. The UWNLP system at SemEval-2018 task 7: Neural relation extraction model with selectively incorporated concept embeddings. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 788–792, New Orleans, Louisiana. Association for Computational Linguistics. Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B Tenenbaum, and Jiajun Wu. 2019. The neurosymbolic concept learner: Interpreting scenes, words, and sentences from natural supervision. arXiv preprint arXiv:1904.12584. Thien Huu Nguyen and Ralph Grishman. 2015. Combining neural networks and log-linear models to improve relation extraction. arXiv preprint arXiv:1511.05926. Farhad Nooralahzadeh, Lilja Øvrelid, and Jan Tore Lønning. 2018. SIRIUS-LTG-UiO at SemEval2018 task 7: Convolutional neural networks with shortest dependency paths for semantic relation extraction and classification in scientific papers. In Proceedings of The 12th International Workshop on Semantic Evaluation, New Orleans, Louisiana. Association for Computational Linguistics. Lizhen Qu, Yi Zhang, Rui Wang, Lili Jiang, Rainer Gemulla, and Gerhard Weikum. 2014. Senti-lssvm: Sentiment-oriented multi-relation extraction with latent structural svm. Transactions of the Association for Computational Linguistics, 2:155–168. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3us-west-2. amazonaws. com/openai-assets/researchcovers/languageunsupervised/language understanding paper. pdf. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. 
In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 74–84. Jonathan Rotsztejn, Nora Hollenstein, and Ce Zhang. 2018. ETH-DS3Lab at SemEval-2018 task 7: Effectively combining recurrent and convolutional neural networks for relation classification and extraction. In Proceedings of The 12th International Workshop on Semantic Evaluation, New Orleans, Louisiana. Association for Computational Linguistics. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In NAACL-HLT, page 464468. Ge Shi, Chong Feng, Lifu Huang, Boliang Zhang, Heng Ji, Lejian Liao, and Heyan Huang. 2018. Genre separation network with adversarial training for cross-genre relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1018–1023. 1377 Linfeng Song, Zhiguo Wang, Mo Yu, Yue Zhang, Radu Florian, and Daniel Gildea. 2018a. Exploring graph-structured passage representation for multihop reading comprehension with graph neural networks. arXiv preprint arXiv:1809.02040. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018b. N-ary relation extraction using graph-state lstm. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2226–2235. Daniil Sorokin and Iryna Gurevych. 2017. ContextAware Representations for Knowledge Base Relation Extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1784–1789. Association for Computational Linguistics. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, pages 455–465. Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, and Andrew McCallum. 2016. Multilingual relation extraction using compositional universal schema. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 886–896. Patrick Verga, Emma Strubell, and Andrew McCallum. 2018. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. In NAACL 2018, pages 872–884. Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. Linguistic Data Consortium, Philadelphia, 57. Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level attention cnns. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1298–1307. Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2015. Semantic relation classification via convolutional neural networks with simple negative sampling. arXiv preprint arXiv:1506.07650. Kun Xu, Siva Reddy, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2016. Question answering on freebase via relation extraction and textual evidence. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2326–2336. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1321–1331. Mo Yu, Wenpeng Yin, Kazi Saidul Hasan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2017. Improved neural relation detection for knowledge base question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 571–581.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1378–1387 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1378 Unsupervised Information Extraction: Regularizing Discriminative Approaches with Relation Distribution Losses Étienne Simon and Vincent Guigue and Benjamin Piwowarski Sorbonne Université, CNRS, Laboratoire d’Informatique de Paris 6 LIP6, F-75005 Paris, France {etienne.simon, vincent.guigue, benjamin.piwowarski}@lip6.fr Abstract Unsupervised relation extraction aims at extracting relations between entities in text. Previous unsupervised approaches are either generative or discriminative. In a supervised setting, discriminative approaches, such as deep neural network classifiers, have demonstrated substantial improvement. However, these models are hard to train without supervision, and the currently proposed solutions are unstable. To overcome this limitation, we introduce a skewness loss which encourages the classifier to predict a relation with confidence given a sentence, and a distribution distance loss enforcing that all relations are predicted in average. These losses improve the performance of discriminative based models, and enable us to train deep neural networks satisfactorily, surpassing current state of the art on three different datasets. 1 Introduction Information extraction models aim at discovering the underlying semantic structure linking entities mentioned in a text. This can be used to build knowledge bases, which are widely used in several applications such as question answering (Yih et al., 2015; Berant et al., 2013), document retrieval (Dalton et al., 2014) and logical reasoning (Socher et al., 2013). In the relation extraction (RE) task, we are interested in discovering the semantic (binary) relation that holds between two entities mentioned in text. The end goal is to extract triplets of the form (subject, relation, object). A considerable amount of work has been conducted on supervised or weakly-supervised relation extraction (Kambhatla, 2004; Zeng et al., 2015; Lin et al., 2016), with recent state-of-the-art models using deep neural networks (NN). Developing unsupervised relation extraction models is interesting for three reasons: they (1) do not necessitate labeled data except for validating the models; (2) can uncover new relation types; and (3) can be trained from large unlabeled datasets, and then fine-tuned for specific relations. The first unsupervised models used a clustering (Hasegawa et al., 2004; Banko et al., 2007) or generative (Yao et al., 2011, 2012) approach. The latter, which obtained state-of-the-art performance, still makes a lot of simplifying hypotheses, such as assuming that the entities are conditionally independent between themselves given the relation. To train more expressive models, a shift to discriminative approaches was necessary. The open question then becomes how to provide a sufficient learning signal to the classifier. To the best of our knowledge, only Marcheggiani and Titov (2016) followed this path by leveraging representation learning for modeling knowledge bases, and proposed to use an auto-encoder model: their encoder extracts the relation from a sentence, that the decoder uses to predict a missing entity. However, their encoder is still limited compared to its supervised counterpart (e.g. Zeng et al. 
(2015)) and relies on hand-crafted features extracted by natural language processing tools, containing errors and unable to discover new patterns, which might hinder performances. More importantly, our initial experiments showed that the above model was unstable, especially when using a deep NN relation classifier. It converged to either of the two following regimes, depending on hyper-parameter settings: always predicting the same relation, or predicting a uniform distribution. To overcome these limitations, we propose to use two new losses alongside a link prediction loss based on a fill-in-the-blank task, and show experimentally that this is key to learning deep neural network models. Our contributions are the following: • We propose two RelDist losses: a skewness loss, which encourages the classifier to pre1379 dict a class with confidence for a single sentence, and a distribution distance loss, which encourages the classifier to scatter a set of sentences into different classes; • We perform extensive experiments on the usual NYT+FB dataset, as well as two new datasets; • We show that our RelDist losses allow us to train a deep PCNN classifier (Zeng et al., 2015) as well as improve performance of feature-based models (Marcheggiani and Titov, 2016). In the following, we first discuss related works (Section 2) before describing our model (Section 3) and presenting experimental results (Section 4). 2 Related work Relation extraction is a standard language classification task: given a sentence containing two entities, the goal is to predict what is the relation linking these two entities. Most relation extraction systems need to be trained on a labeled dataset. However human annotation is expensive, and virtually impractical when a large number of relations is involved. As a result, most systems are trained on datasets built through distant supervision (Mintz et al., 2009), a compromise between the supervised and unsupervised settings. It makes the following assumption: if a sentence contains two entities linked in a knowledge base, this sentence necessarily conveys that relation. For example, distant supervision aligns the sentence “Hubele1 received the Nobel Prizee2 for his discovery” with the triplet (Hubel, award received, Nobel Prize), thus supervising the sentence with the label “award received”. The resulting alignment are of a poorer quality, and even though this method can leverage large amounts of unlabeled text, the relation ontology is still fixed by a knowledge base, the resulting model being unable to discover new relations. In the supervised setting, neural network models have demonstrated substantial improvement over approaches using hand-crafted features. In particular, piecewise convolutional neural networks (PCNN, Zeng et al., 2015) are now widely used as a basis for other improvements, such as the instance-level selective attention mechanism of Lin et al. (2016) which follows the multiinstance multi-label framework (Hoffmann et al., 2011; Surdeanu et al., 2012). The recent NN approaches however need large amount of data to achieve good performances. In the unsupervised setting, models have no access to annotated sentences or to a knowledge base: other regularity hypotheses have to be made. The resulting models can be categorized into either the generative/clustering or discriminative approaches. 
The former try to cluster regularities in the text surrounding two entities, while the latter use discriminative models but have to make further hypotheses, namely that a pair of given entities always share the same relation, to provide a learning signal for the classifier. Among clustering models, one of the earliest work is from Hasegawa et al. (2004) who propose building clusters by using cosine similarity on TFIDF vectors for the surrounding text. Later, the OpenIE approaches (Banko et al., 2007; Angeli et al., 2015) relied upon the hypothesis that the surface form of the relation conveyed by a sentence appears in the path between the two entities in its dependency tree. However, these latter works are too dependent on the raw surface form and suffer from bad generalization. In our previous example, OpenIE will extract the triplet (Hubel, received, Nobel Prize), but simply replacing “received” by “was awarded” might produce a different relation even though the semantic remains the same. Related to these clustering approaches, the RelLDA models (Yao et al., 2011, 2012) use a generative model inspired by LDA to cluster sentences: each relation defines a distribution over a highlevel handcrafted set of features describing the relationship between the two entities in the text (e.g. the dependency path). However, these models are limited in their expressiveness. More importantly, depending on the set of features, they might focus on features not related to the relation extraction task. We posit that discriminative approaches can help in going further in expressiveness, especially considering recent results with neural network models. To the best of our knowledge, the only discriminative approach to unsupervised relation extraction is the variational autoencoder approach (VAE) proposed by Marcheggiani and Titov, 2016): the encoder extracts the semantic relation from hand-crafted features of the sentence (related to those of Rel-LDA), while the decoder 1380 The sol was the currency of Peru between 1863 and 1985. prefix infix suffix e1 e2 Figure 1: A sentence from Wikipedia where the conveyed relation is “currency used by”. We call s the sentence with the two entities removed: s = (prefix, infix, suffix). tries to predict one of the two entities given the relation and the other entity, using a general triplet scoring function (Nickel et al., 2011). This scoring function provides a signal since it is known to predict to some extent relation triplets given their embeddings. Among the input features of the classifiers are the entities themselves, the resulting model can thus be interpreted as an autoencoder where the encoder part benefits from an additional context. The proposed loss, based on the KL divergence between the posterior distribution over relations and a uniform prior on the relation distribution, is very unstable in practice. Our proposed approaches solve this unstability, and allows us to train expressive classifiers such as the PCNN model (Zeng et al., 2015). 3 Model description Our model focuses on extracting the relation between two entities in textual data, and assumes that a recognition tool has identified named entities in the text. Furthermore, like most works on relation extraction, we limit ourselves to binary relations and therefore consider sentences with two tagged entities, as shown in Figure 1. To provide a supervision signal to our relation classifier, we follow Marcheggiani and Titov (2016) and use a fill-in-the-blank task, i.e. “The sole1 was the currency of ? 
e2 between 1863 and 1985.”. To correctly fill in the blank, we could directly learn to predict the missing entity, but in this case we would not be able to learn a relation classifier. Instead, we want to first learn that this sentence expresses the semantic relation “currency used by” before using this information for a supervised task: (i) We suppose that the relation can be predicted by the text surrounding the two entities alone (see Figure 1); (ii) We then try to predict the missing entity given the predicted relation and the other entity – this gives the supervision signal. These hypotheses lead to the following formulation of the fill-in-the-blank task: p(e−i | s, ei) = X r p(r | s) | {z } (i) classifier p(e−i | r, ei) | {z } (ii) link predictor (1) where e1 and e2 are the two entities, s is the text surrounding them and r is the relation linking them. As the link predictor can consider either entity, we use ei to designate the given entity, and e−i = {e1, e2} \ {ei} the one to predict. The relation classifier p(r | s) and link predictor p(e−i | r, ei) are trained jointly to reconstruct a missing entity, but the link predictor cannot access the input sentence directly. Thus, all the required information must be condensed into r, which acts as a bottleneck. We advocate that this information is the semantic relation between the two entities. Note that Marcheggiani and Titov (2016) did not make our first independence hypothesis. Instead, their classifier is conditioned on both ei and e−i, strongly relying on the fact that r is an information bottleneck. In the following, we first describe the relation classifier p(r | s) in section 3.1, before introducing the link predictor p(e−i | r, ei) in section 3.2. Arguing that the resulting model is unstable, we describe the two new RelDist losses in section 3.3. 3.1 Unsupervised Relation Classifier Our model for p(r | s) follows current state-ofthe-art practices for supervised relation extraction by using a piecewise convolutional neural network (PCNN, Zeng et al., 2015). The input sentence can be split into three parts separated by the two entities (see Figure 1). In a PCNN, the model outputs a representation for each part of the sentence. These are then combined to make a prediction. Figure 2 shows the network architecture that we now describe. First, each word of s is mapped to a real-valued vector. In this work, we use standard word embedding, initialized with GloVe1 (Pennington et al., 2014), and fine-tune them during training. Based on those embeddings, a convolutional layer detects 16B.50d from https://nlp.stanford.edu/ projects/glove/ 1381 Founded in Rome ( then capital of the Papal States ) in 1575 by St Philip ... prefix infix suffix Linear softmax p(r | s) Conv max pooling tanh Conv max pooling tanh Conv max pooling tanh Figure 2: Our relation extraction model. Its input is the sentence with the entities removed s = {prefix, infix, suffix}. Each part is run through a convolutional layer to give a fixed-size representation, which are then fed to a softmax layer to make a prediction. patterns in subsequences of words. Then, a maxpooling along the text length combines all features into a fixed-size representation. Note that in our architecture, we obtained better results by using three distinct convolutions, one for each sentence part (i.e. the weights are not shared). We then apply a non-linear function (tanh) and sum the three vectors into a single representation for s. 
Finally, this representation is fed to a softmax layer to predict the distribution over the relations. This distribution can be plugged into equation (1). Denoting fPCNN our classifier, we have: p(r | s) = fPCNN(r; s, θPCNN) where θPCNN are the parameters of the classifier. Note that we can use the PCNN to predict the relationship for any pair of entities appearing in any sentence, since the input will be different for each pair selected (see Figure 2). 3.2 Link Predictor The purpose of the link predictor is to provide supervision for the relation classifier. As such, it needs to be differentiable. We follow Marcheggiani and Titov (2016) to model p(ei | r, e−i), and use an energy-based formalism, where ψ(e1, r, e2) is the energy associated with (e1, r, e2). The probability is obtained as follows: p(e1 | r, e2) ∝exp(ψ(e1, r, e2)) (2) where ψ is expressed as the sum of two standard relational learning models: ψ(e1, r, e2) = uT e1Arue2 | {z } RESCAL + uT e1Br + uT e2Cr | {z } Selectional Preferences where u ∈R|E|×m is an entity embedding matrix, A ∈R|R|×m×m is a three-way tensor encoding the entities interaction and B, C ∈R|R|×m are two matrices encoding the preferences of each relation of certain entities, and the hyper-parameter m is the dimension of the embedded entities. The function ψ also depends on the energy functions parameters θψ = {A, B, C, u} that we omit for legibility. RESCAL (Nickel et al., 2011) uses a bilinear tensor product to gauge the compatibility of the two entities, whereas in the Selectional Preferences model only the predisposition of an entity to appear as the subject or object of a relation is captured. Negative Sampling The number of entities being very large, the partition function of equation (2) cannot be efficiently computed. To avoid the summation over the set of entities, we follow Marcheggiani and Titov (2016) and use negative sampling (Mikolov et al., 2013): instead of training a softmax classifier, we train a discriminator which tries to recognize real triplets (D = 1) from fake ones (D = 0): p(D = 1 | e1, e2, r) = σ (ψ(e1, r, e2)) where σ(x) = 1/(1 + exp(−x)) is the sigmoid function. This model is then trained by generating negative entities for each position and optimizing 1382 the negative log likelihood: LLP = E (e1,e2,s)∼χ r∼fPCNN(s)  −2 log σ (ψ(e1, r, e2)) − k X j=1 E e′∼E  log σ −ψ(e1, r, e′)  − k X j=1 E e′∼E  log σ −ψ(e′, r, e2)   (3) This loss is defined over the data distribution χ, i.e. the samples (e1, e2, s) follow a uniform distribution over sentences tagged with two entities. The distribution of the relation r for the sentence s is then given by the classifier fPCNN(s), which corresponds to the P r p(r | s) in equation (1). Following standard practice, during training, the expectation on negative entities is approximated by sampling k random entities following the empirical entity distribution E for each position. 3.3 RelDist losses Training the classifier through equation (3) alone is very unstable and dependent on precise hyperparameter tuning. More precisely, according to our early experiments, the training process usually collapses into one of two regimes: (P1) The classifier is very uncertain about which relation is expressed and outputs a relation following a uniform distribution ; (P2) All sentences are classified as conveying the same relation. In both cases, the link predictor can do a good job minimizing LLP by ignoring the output of the classifier, simply exploiting entities co-occurrences. 
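The two RelDist losses can be summarized in a short sketch, assuming the classifier outputs p(r | s) for a mini-batch are stacked into a single tensor. This is PyTorch-style code written for this exposition; the epsilon smoothing and the identifiers are illustrative assumptions, not part of the original model.

```python
import math
import torch

def reldist_losses(probs: torch.Tensor, eps: float = 1e-8):
    """Sketch of the RelDist losses of Sec. 3.3.
    probs: classifier outputs p(r | s) for a mini-batch, shape (batch, n_rel).

    L_S (skewness, Eq. 4): mean per-sentence entropy, minimized so that each
    sentence receives a confident, near one-hot prediction.
    L_D (dispersion, Eq. 5): KL between the batch-level marginal p(R) and the
    uniform distribution, minimized so that all relations are used."""
    # Skewness: E_s[ H(R | s) ]
    l_s = -(probs * (probs + eps).log()).sum(dim=-1).mean()

    # Dispersion: D_KL(p(R) || U), with p(R) estimated by the batch mean
    p_marginal = probs.mean(dim=0)
    n_rel = probs.size(-1)
    l_d = (p_marginal * ((p_marginal + eps).log() - math.log(1.0 / n_rel))).sum()
    return l_s, l_d

# Total objective (Eq. 6): L = L_LP + alpha * l_s + beta * l_d
```

Note that l_d differs from the batch approximation given above only by the constant log of the number of relations, so minimizing either quantity is equivalent.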
More precisely, many entities only appear in one relationship with a single other entity. In this case, the link predictor can easily ignore the relationship r and predict the missing entity – and there is a pressure for this as the classifier’s output is not yet reliable at the beginning of the optimization process. This instability problem is particularly true since the two components (classifier and link predictor) are strongly interdependent: the classifier cannot be trained without a good link predictor, which itself cannot take r into account without a good classifier resulting in a bootstrap problem. To overcome these pitfalls, we developed two additional losses, that we now describe. Skewness. Firstly, to encourage the classifier to be confident in its output, we minimize the entropy of the predicted relation distribution. This addresses P1 by forcing the classifier toward outputting one-hot vectors for a given sentence using the following loss: LS = E(e1,e2,s)∼χ [H(R | e1, e2, s)] (4) where R is the random variable corresponding to the predicted relation. Following our first independence hypothesis, the entropy of equation (4) is equivalent to H(R | s). Dispersion. Secondly, to ensure that the classifier predicts several relations, we minimize the KL-divergence between the prior p(R) and the uniform distribution U, that is: LD = DKL(p(R) ∥U) (5) Note that contrary to LS, in order to have a good approximation of p(R), the loss LD measures the un-conditionnal distribution over R, i.e. the distribution of predicted relations over all sentences. This addresses P2 by forcing the classifier toward predicting each class equally often over a set of sentences. To satisfactorily and jointly train the link predictor and the classifier, we use the two losses at the same time, resulting in the final loss: L = LLP + αLS + βLD (6) where α and β are both positive hyper-parameters. All three losses are defined over the real data distribution, but in practice they are approximated at the level of a mini-batch. First, both LLP and LS can be computed for each sample independently. To optimize LD however, we need to estimate p(R) at the mini-batch level, and maximize the entropy of the mean predicted relation. Formally, let si for i = 1, . . . , B be the i-th sentence in a batch of size B, we approximate LD as: X r B X i=1 fPCNN(r; si) B ! log B X i=1 fPCNN(r; si) B ! Learning We optimize the empirical estimation of (6), learning the PCNN parameters and word embeddings θPCNN as well as the link predictor parameters and entity embeddings θψ jointly. 1383 Comparison to VAE When computing the loss of the VAE model (Marcheggiani and Titov, 2016), aside from the reconstruction term LLP, the following regularization term is derived: LVAEreg = E(e1,e2,s)∼χ [−H(R | e1, e2, s)] This term results from the KL between p(R | e1, e2, s) and the uniform distribution. Its purpose is to prevent the classifier from always predicting the same relation, i.e. it has the same purpose as our distance loss LD. However its expression is equivalent to −LS, and indeed, minimizing the opposite of our skewness loss increases the entropy of the classifier output, addressing P2. Yet, using LVAEreg = −LS alone, draws the classifier into the other pitfall P1. This causes a drop in performance, as we will show experimentally. 4 Experiments 4.1 Datasets To evaluate our model we use labeled datasets, the labels being used for validation2 and evaluation. 
The first dataset is the one of Marcheggiani and Titov (2016), which is similar to the one used in Yao et al. (2011). This dataset was built through distant supervision (Mintz et al., 2009) by aligning sentences from the New York Times corpus (NYT, Sandhaus, 2008) with Freebase (FB, Bollacker et al., 2008) triplets. Several sentences were filtered out based on features like the length of the dependency path between the two entities, resulting in 2 million sentences with only 41,000 (2%) of them labeled with one of 262 possible relations. 20% of the labeled sentences were set aside for validation, the remaining 80% are used to compute the final results. We also extracted two datasets from T-REx (Elsahar et al., 2017) which was built as an alignment of Wikipedia with Wikidata (Vrandeˇci´c, 2012). We only consider triplets where both entities appear in the same sentence. If a single sentence contains multiple triplets, it will appear multiple times in the dataset, each time with a different pair of target entities. We built the first dataset DS by extracting all triplets of T-REx where the two entities are linked by a relation in Wikidata. This is the usual distant supervision method. It resulted in 1189 relations and nearly 12 million sentences, all of them labeled with a relation. 2As in other unsupervised RE papers. In Wikidata, each relation is annotated with a list of associated surface forms, for example “shares border with” can be conveyed by “borders”, “adjacent to”, “next to”, etc. The second dataset we built, SPO, only contains the sentences where a surface form of the relation also appears, resulting in 763,000 samples (6% of the unfiltered) and 615 relations. This dataset still contains some misalignment, but should nevertheless be easier for models to extract the correct semantic relation. 4.2 Baseline and Model We compare our model with two state-of-the-art approaches, two generative rel-LDA models of Yao et al. (2011) and the VAE model of Marcheggiani and Titov (2016). The two rel-LDA models only differ by the number of features considered. We use the 8 features listed in Marcheggiani and Titov (2016). Rel-LDA uses the first 3 simplest features defined in their paper, while rel-LDA1 is trained by iteratively adding more features until all 8 are used. To assess our two main contributions individually, we evaluate the PCNN classifier and our additional losses separately. More precisely, we first study the effect of the RelDist losses by looking at the differences between models optimizing LLP −αLS and the ones optimizing LLP + αLS + βLD. Second, we study the effect of the relation classifier by comparing the feature-based classifier and the PCNN trained with the same losses. We thus have four models: March−LS (which corresponds to the model of Marcheggiani and Titov (2016)), March+LS+LD, PCNN−LS and PCNN+LS + LD. All models are trained with 10 relation classes, which, while lower than the number of true relations, allows to compare faithfully the models since the distribution of gold relations is very unbalanced. For feature-based models, the size of the features domain range from 1 to 10 million values depending on the dataset. We train our models with Adam using L2 regularization on all parameters. To have a good estimation of p(R) in the computation of LD, we use a batch size of 100. Words embeddings are of size 50, entities embeddings of size m = 10. We sample k = 5 negative samples to estimate LLP. Lastly, we set α = 0.01 and β = 0.02. 
All three datasets come with a validation set, and following Marcheggiani and Titov (2016), we used it for cross-validation to optimize 1384 Dataset Model B3 V-measure ARI Classifier Reg. F1 Prec. Rec. F1 Hom. Comp. NYT+FB rel-LDA 29.1 24.8 35.2 30.0 26.1 35.1 13.3 rel-LDA1 36.9 30.4 47.0 37.4 31.9 45.1 24.2 March. −LS 35.2 23.8 67.1 27.0 18.6 49.6 18.7 PCNN −LS 27.6 24.3 31.9 24.7 21.2 29.6 15.7 March. LS + LD 37.5 31.1 47.4 38.7 32.6 47.8 27.6 PCNN LS + LD 39.4 32.2 50.7 38.3 32.2 47.2 33.8 T-REx SPO rel-LDA 11.9 10.2 14.1 5.9 4.9 7.4 3.9 rel-LDA1 18.5 14.3 26.1 19.4 16.1 24.5 8.6 March. −LS 24.8 20.6 31.3 23.6 19.1 30.6 12.6 PCNN −LS 25.3 19.2 37.0 23.1 18.1 31.9 10.8 March. LS + LD 29.5 22.7 42.0 34.8 28.4 45.1 20.3 PCNN LS + LD 36.3 28.4 50.3 41.4 33.7 53.6 21.3 T-REx DS rel-LDA 9.7 6.8 17.0 8.3 6.6 11.4 2.2 rel-LDA1 12.7 8.3 26.6 17.0 13.3 23.5 3.4 March. −LS 9.0 6.4 15.5 5.7 4.5 7.9 1.9 PCNN −LS 12.2 8.6 21.1 12.9 10.1 18.0 2.9 March. LS + LD 19.5 13.3 36.7 30.6 24.1 42.1 11.5 PCNN LS + LD 19.7 14.0 33.4 26.6 20.8 36.8 9.4 Table 1: Results (percentage) on our three datasets. The rel-LDA and rel-LDA1 models come from Yao et al. (2011). The model of Marcheggiani and Titov (2016) is March −LS. the B3F1 (described below). 4.3 Evaluation metrics We used the B3 metric used in Yao et al. (2011) and Marcheggiani and Titov (2016), and complemented it with two more metrics commonly seen in clustering task evaluation: V-measure (Rosenberg and Hirschberg, 2007) and ARI (Hubert and Arabie, 1985), allowing us to capture the characteristics of each approach more in detail. To clearly describe the different metrics, we propose a common probabilistic formulation of those (in practice, they are estimated on the validation and test sets), and use the following notations. Let X (or Y ) be a random variable corresponding to a sentence. We denote c(X) the predicted cluster of X and g(X) its conveyed gold relation. B-cubed. The first metric we compute is a generalization of F1 for clustering tasks called B3 (Bagga and Baldwin, 1998). The B3 precision and recall are defined as follows: B3 Precision = E X,Y P (g(X) = g(Y ) | c(X) = c(Y )) B3 Recall = E X,Y P (c(X) = c(Y ) | g(X) = g(Y )) As precision and recall can be trivially maximized by putting each sample in its own cluster or by clustering all samples into a single class, the main metric B3 F1 is defined as the harmonic mean of precision and recall. V-measure. We also consider an entropy-based metric (Rosenberg and Hirschberg, 2007); this metric is defined by the homogeneity and completeness, which are akin to B3 precision and recall, but rely on conditional entropy: Homogeneity = 1 −H (c(X) | g(X)) /H (c(X)) Completeness = 1 −H (g(X) | c(X)) /H (g(X)) As B3, the V-measure is summarized by the F1 value. Compared to B3, the V-measure penalizes small impurities in a relatively “pure” cluster more harshly than in less pure ones. Symmetrically, it penalizes more a degradation of a well clustered relation than of a less well clustered one. Adjusted Rand Index. Finally, the Rand Index is defined as the probability that cluster and gold assignments are compatible: RI = E X,Y [P (c(X) = c(Y ) ⇔g(X) = g(Y ))] The Adjusted Rand Index (ARI, Hubert and Arabie, 1985) is a normalization of the Rand Index such that a random assignment has an ARI of 0, and the maximum is 1. Compared to the previous metrics, ARI will be less sensitive to a discrepancy between precision/homogeneity and recall/completeness since it is not an harmonic mean of both. 
1385 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 e1 located in e2 (16.4%) e1 instance of e2 (15.0%) e1 in country e2 (9.6%) e2 instance of e1 (7.4%) e1 shares border e2 (4.5%) e2 shares border e1 (4.5%) e2 located in e1 (4.4%) e2 in country e1 (3.6%) e1 cast member of e2 (2.7%) e1 capital of e2 (1.6%) e1 director of e2 (1.4%) e1 has child e2 (1.2%) e2 has child e1 (1.0%) e1 member of e2 (0.9%) e2 capital of e1 (0.9%) rel-LDA1 March. −LS March. + LS + LD PCNN + LS + LD Figure 3: Normalized contingency tables for the TREx SPO dataset. Each of the 10 columns corresponds to a predicted relation cluster, which were sorted to ease comparison. The rows identify Wikidata relations sorted by frequency in the TREx SPO corpus. The area of each square is proportional to the number of sentences in the cell. The matrix was normalized so that each row sum to 1, thus it is more akin to a B3 per-item recall than a true contingency table. 4.4 Results The results reported in Table 1 are the average test scores of three runs on the NYT+FB and T-REx SPO datasets, using different random initialization of the parameters – in practice the variance was low enough so that reported results can be analyzed. We observe that regardless of the model and metrics, the highest measures are obtained on T-REx SPO, then NYT+FB and finally T-REx DS. This was to be expected, since T-REx SPO was built to be easy, and hard-to-process sentences were filtered out of NYT+FB (Yao et al., 2011; Marcheggiani and Titov, 2016). We also observe that main metrics agree in general (B3, V-measure and ARI) in most cases. Performing a PCA on the measures, we observed that V-measure forms a nearly-orthogonal axis to B3, and to lesser extent ARI. Hence we can focus on B3 and V-measure in our analysis. We first measure the benefit of our RelDist losses: on all datasets and metrics, the two models using +LS + LD are systematically better than the ones using −LS alone: (1) The PCNN models consistently gain between 7 and 11 points in B3 F1 from these additional losses; (2) The featurebased classifier benefits from the RelDist losses to a lesser extent, except on the T-REx DS dataset on which the March−LS model without the RelDist losses completely collapses – we hypothesize that this dataset is too hard for the model given the number of parameters to estimate. We now restrict to discriminative models based on +LS + LD. We note that both (March/PCNN) exhibit better performances than generative ones (Rel-LDA, Rel-LDA1) with a difference ranging from 2.5/0.6 (NYT, for March/PCNN) to 11/17.8 (on SPO). However, the advantage of PCNN over feature-based classifier is not completely clear. While the PCNN version has a systematically better B3 F1 on all datasets (∆of 0.2/1.9/6.8 respectively for DS/NYT/SPO), the V-measure decreases by 0.4/4.0 on respectively NYT/DS, and ARI by 2.1 on DS. As B3 F1 was used for validation, this shows that the PCNN models overfit this metric by polluting relatively clean clusters with unrelated sentences or degrades well clustered gold relations by splitting them within two clusters. 4.5 Qualitative Analysis Since all the metrics agree on the SPO dataset, we plot the contingency tables of our models in Figure 3. Each row is labeled with the gold Wikidata relation extracted through distant supervision. Since relations are generally not symmetric, each Wikidata relation appears twice in the table, once for each disposition of the entities in the sentence. 
This is particularly problematic with symmetric relations like “shares border” which are two different gold relations that actually convey the same semantic. To interpret Figure 3, we have to see whether a predicted cluster (column) contains different gold relations – paying attention to the fact that the most important gold relations are listed in the top rows (the top 5 relations account for 50% of sentences). The first thing to notice is that the contingency tables of both models using our RelDist losses are sparser (for each columnn), which means that our models better separate relations from each other. We observe that March−LS is 1386 affected by the pitfall P1 (uniform distribution) for many gold clusters. The −LS loss forces the classifier to be uncertain about which relation is expressed, translating into a dense contingency table and resulting in poor performances. The RelLDA1 model is even worse, and fails to identify clear clusters, showing the limitations of a purely generative approach that might focus on clusters not linked with any relation. Focusing on our proposed model, PCNN+LS + LD (rightmost figure), we looked at two different mistakes. The first is a gold cluster divided in two (low recall). When looking at clusters 0 and 1, we did not find any recognizable pattern. Moreover, the corresponding link predictor parameters are very similar. This seems to be a limitation of the distance loss: splitting a large cluster in two may improve LD but worsen all the evaluation metrics. The model is then penalized by the fact that it lost one slot to transmit information between the classifier and the link predictor. The second type of mistake is when a predicted cluster corresponds to two gold ones (low precision). Here, most of the mistakes seem understandable: "shares border" is symmetric (cluster 7), “located in” and “in country” (cluster 8) or “cast member” and “director of” (cluster 9) are clearly related. 5 Conclusion In this paper, we show that discriminative relation extraction models can be trained efficiently on unlabeled datasets. Unsupervised relation extraction models tends to produce impure clusters by enforcing a uniformity constrain at the level of a single sample. We proposed two losses (named RelDist) to effectively train expressive relation extraction models by enforcing the distribution over relations to be uniform – note that other target distributions could be used. In particular, we were able to successfully train a deep neural network classifier that only performed well in a supervised setting so far. We demonstrated the effectiveness of our RelDist losses on three datasets and showcased its effect on cluster purity. Future work will investigate more complex and recent neural network models such as Devlin et al. (2018), as well as alternative losses. In particular, while forcing an uniform distribution with the distance loss LD might be meaningful with a low number of predicted clusters, it might not generalize to larger number of relations. Preliminary experiments seem to indicate that this can be addressed by replacing the uniform distribution in equation 5 with the empirical distribution of the relations in the validation set, or any other appropriate law if no validation set is available. Acknowledgments We are grateful to Diego Marcheggiani for for sharing his dataset with us. Furthermore, we would like to thank Alexandre Allauzen, Xavier Tannier as well as the anonymous ACL reviewers for their valuable remarks. 
This work was lead with the support of the FUI-BInD Project. References Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 344–354. Amit Bagga and Breck Baldwin. 1998. Entitybased cross-document coreferencing using the vector space model. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics-Volume 1, pages 79–85. Association for Computational Linguistics. Michele Banko, Michael J Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJCAI, volume 7, pages 2670–2676. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In EMNLP. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247–1250. AcM. Jeffrey Dalton, Laura Dietz, and James Allan. 2014. Entity query feature expansion using knowledge base links. In Proceedings of the 37th International ACM SIGIR Conference on Research &#38; Development in Information Retrieval, SIGIR ’14, pages 365–374, New York, NY, USA. ACM. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. 1387 Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Elena Simperl, and Frederique Laforest. 2017. T-rex: A large scale alignment of natural language with knowledge base triples. Proceedings of the 11th International Conference on Language Resources and Evaluation. Takaaki Hasegawa, Satoshi Sekine, and Ralph Grishman. 2004. Discovering relations among named entities from large corpora. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 415. Association for Computational Linguistics. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 541–550. Association for Computational Linguistics. Lawrence Hubert and Phipps Arabie. 1985. Comparing partitions. Journal of classification, 2(1):193– 218. Nanda Kambhatla. 2004. Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations. In Proceedings of the ACL 2004 on Interactive poster and demonstration sessions, page 22. Association for Computational Linguistics. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2124–2133, Berlin, Germany. Association for Computational Linguistics. Diego Marcheggiani and Ivan Titov. 2016. 
Discretestate variational autoencoders for joint discovery and factorization of relations. Transactions of the Association for Computational Linguistics, 4:231–244. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003–1011. Association for Computational Linguistics. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In ICML, volume 11, pages 809–816. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Andrew Rosenberg and Julia Hirschberg. 2007. Vmeasure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL). Evan Sandhaus. 2008. The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752. Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in neural information processing systems, pages 926–934. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, pages 455–465. Association for Computational Linguistics. Denny Vrandeˇci´c. 2012. Wikidata: A new platform for collaborative data collection. In Proceedings of the 21st international conference on world wide web, pages 1063–1064. ACM. Limin Yao, Aria Haghighi, Sebastian Riedel, and Andrew McCallum. 2011. Structured relation discovery using generative models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1456–1466. Association for Computational Linguistics. Limin Yao, Sebastian Riedel, and Andrew McCallum. 2012. Unsupervised relation discovery with sense disambiguation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 712–720. Association for Computational Linguistics. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1321–1331. Association for Computational Linguistics. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Emnlp, pages 1753–1762.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1388–1398 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1388 Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction Christoph Alt Marc H¨ubner Leonhard Hennig German Research Center for Artificial Intelligence (DFKI) Speech and Language Technology Lab {christoph.alt, marc.huebner, leonhard.hennig}@dfki.de Abstract Distantly supervised relation extraction is widely used to extract relational facts from text, but suffers from noisy labels. Current relation extraction methods try to alleviate the noise by multi-instance learning and by providing supporting linguistic and contextual information to more efficiently guide the relation classification. While achieving state-of-the-art results, we observed these models to be biased towards recognizing a limited set of relations with high precision, while ignoring those in the long tail. To address this gap, we utilize a pre-trained language model, the OpenAI Generative Pre-trained Transformer (GPT) (Radford et al., 2018). The GPT and similar models have been shown to capture semantic and syntactic features, and also a notable amount of “common-sense” knowledge, which we hypothesize are important features for recognizing a more diverse set of relations. By extending the GPT to the distantly supervised setting, and fine-tuning it on the NYT10 dataset, we show that it predicts a larger set of distinct relation types with high confidence. Manual and automated evaluation of our model shows that it achieves a state-of-the-art AUC score of 0.422 on the NYT10 dataset, and performs especially well at higher recall levels. 1 Introduction Relation extraction (RE), defined as the task of identifying the relationship between concepts mentioned in text, is a key component of many natural language processing applications, such as knowledge base population (Ji and Grishman, 2011) and question answering (Yu et al., 2017). Distant supervision (Mintz et al., 2009; Hoffmann et al., 2011) is a popular approach to heuristically generate labeled data for training RE systems by aligning entity tuples in text with known relation instances from a knowledge base, but suffers from Figure 1: Distant supervision generates noisily labeled relation mentions by aligning entity tuples in a text corpus with relation instances from a knowledge base. noisy labels and incomplete knowledge base information (Min et al., 2013; Fan et al., 2014). Figure 1 shows an example of three sentences labeled with an existing KB relation, two of which are false positives and do not actually express the relation. Current state-of-the-art RE methods try to address these challenges by applying multi-instance learning methods (Mintz et al., 2009; Surdeanu et al., 2012; Lin et al., 2016) and guiding the model by explicitly provided semantic and syntactic knowledge, e.g. part-of-speech tags (Zeng et al., 2014) and dependency parse information (Surdeanu et al., 2012; Zhang et al., 2018b). Recent methods also utilize side information, e.g. paraphrases, relation aliases, and entity types (Vashishth et al., 2018). However, we observe that these models are often biased towards recognizing a limited set of relations with high precision, while ignoring those in the long tail (see Section 5.2). Deep language representations, e.g. 
those learned by the Transformer (Vaswani et al., 2017) via language modeling (Radford et al., 2018), have been shown to implicitly capture useful semantic and syntactic properties of text solely by 1389 unsupervised pre-training (Peters et al., 2018), as demonstrated by state-of-the-art performance on a wide range of natural language processing tasks (Vaswani et al., 2017; Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018), including supervised relation extraction (Alt et al., 2019). Radford et al. (2019) even found language models to perform fairly well on answering open-domain questions without being trained on the actual task, suggesting they capture a limited amount of “common-sense” knowledge. We hypothesize that pre-trained language models provide a stronger signal for distant supervision, better guiding relation extraction based on the knowledge acquired during unsupervised pre-training. Replacing explicit linguistic and side-information with implicit features improves domain and language independence and could increase the diversity of the recognized relations. In this paper, we introduce a Distantly Supervised Transformer for Relation Extraction (DISTRE). We extend the standard Transformer architecture by a selective attention mechanism to handle multi-instance learning and prediction, which allows us to fine-tune the pre-trained Transformer language model directly on the distantly supervised RE task. This minimizes explicit feature extraction and reduces the risk of error accumulation. In addition, the self-attentive architecture allows the model to efficiently capture longrange dependencies and the language model to utilize knowledge about the relation between entities and concepts acquired during unsupervised pre-training. Our model achieves a state-of-the-art AUC score of 0.422 on the NYT10 dataset, and performs especially well at higher recall levels, when compared to competitive baseline models. We selected the GPT as our language model because of its fine-tuning efficiency and reasonable hardware requirements, compared to e.g. LSTMbased language models (Ruder and Howard, 2018; Peters et al., 2018) or BERT (Devlin et al., 2018). The contributions of this paper can be summarized as follows: • We extend the GPT to handle bag-level, multi-instance training and prediction for distantly supervised datasets, by aggregating sentence-level information with selective attention to produce bag-level predictions (§ 3). • We evaluate our fine-tuned language model on the NYT10 dataset and show that it achieves a state-of-the-art AUC compared to RESIDE (Vashishth et al., 2018) and PCNN+ATT (Lin et al., 2016) in held-out evaluation (§ 4, § 5.1). • We follow up on these results with a manual evaluation of ranked predictions, demonstrating that our model predicts a more diverse set of relations and performs especially well at higher recall levels (§ 5.2). • We make our code publicly available at https://github.com/DFKI-NLP/ DISTRE. 2 Transformer Language Model This section reviews the Transformer language model as introduced by Radford et al. (2018). We first define the Transformer-Decoder (Section 2.1), followed by an introduction on how contextualized representations are learned with a language modeling objective (Section 2.2). 2.1 Transformer-Decoder The Transformer-Decoder (Liu et al., 2018a), shown in Figure 2, is a decoder-only variant of the original Transformer (Vaswani et al., 2017). 
Like the original Transformer, the model repeatedly encodes the given input representations over multiple layers (i.e., Transformer blocks), consisting of masked multi-head self-attention followed by a position-wise feedforward operation. In contrast to the original decoder blocks, this version contains no form of unmasked self-attention, since there are no encoder blocks. This is formalized as follows:

$$h_0 = T W_e + W_p$$
$$h_l = \mathrm{tf\_block}(h_{l-1}) \quad \forall l \in [1, L] \qquad (1)$$

where $T$ is a matrix of one-hot row vectors of the token indices in the sentence, $W_e$ is the token embedding matrix, $W_p$ is the positional embedding matrix, $L$ is the number of Transformer blocks, and $h_l$ is the state at layer $l$. Since the Transformer has no implicit notion of token positions, the first layer adds a learned positional embedding $e_p \in \mathbb{R}^d$ to each token embedding $e_t^p \in \mathbb{R}^d$ at position $p$ in the input sequence. The self-attentive architecture allows an output state $h_l^p$ of a block to be informed by all input states $h_{l-1}$, which is key to efficiently model long-range dependencies. For language modeling, however, self-attention must be constrained (masked) not to attend to positions ahead of the current token.

[Figure 2: Transformer-Block architecture and training objectives. A Transformer-Block is applied at each of the $L$ layers to produce states $h_1$ to $h_L$. After encoding each sentence in a bag into its representation $s_i$, selective attention informs the relation classifier with a representation aggregated over all sentences $[s_1, \ldots, s_n]$.]

For a more exhaustive description of the architecture, we refer readers to Vaswani et al. (2017) and the excellent guide “The Annotated Transformer” (http://nlp.seas.harvard.edu/2018/04/03/attention.html).

2.2 Unsupervised Pre-training of Language Representations

Given a corpus $C = \{c_1, \ldots, c_n\}$ of tokens $c_i$, the language modeling objective maximizes the likelihood

$$L_1(C) = \sum_i \log P(c_i \mid c_{i-1}, \ldots, c_{i-k}; \theta), \qquad (2)$$

where $k$ is the context window considered for predicting the next token $c_i$ via the conditional probability $P$. The distribution over the target tokens is modeled using the previously defined Transformer model as follows:

$$P(c) = \mathrm{softmax}(h_L W_e^T), \qquad (3)$$

where $h_L$ is the sequence of states after the final layer $L$, $W_e$ is the embedding matrix, and $\theta$ are the model parameters that are optimized by stochastic gradient descent. This results in a probability distribution for each token in the input sequence.

3 Multi-Instance Learning with the Transformer

This section introduces our extension to the original Transformer architecture, enabling bag-level multi-instance learning on distantly supervised datasets (Section 3.1), followed by a description of our task-specific input representation for relation extraction (Section 3.2).
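To make this pre-training setup concrete, the sketch below summarizes the decoder stack of Eq. 1 and the language-modeling objective of Eq. 2–3. It is an illustrative PyTorch re-implementation under simplifying assumptions (post-norm `nn.TransformerEncoderLayer` blocks with a causal mask stand in for GPT's pre-norm decoder blocks), not the authors' released code.

```python
import torch
import torch.nn as nn

class DecoderOnlyLM(nn.Module):
    """Decoder-only Transformer LM: h0 = T We + Wp, h_l = tf_block(h_{l-1})."""
    def __init__(self, vocab_size, max_len=512, d_model=768, n_layers=12, n_heads=12):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)   # W_e
        self.pos_emb = nn.Embedding(max_len, d_model)      # W_p
        block = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(block, n_layers)

    def forward(self, tokens):                              # tokens: (batch, seq_len)
        seq_len = tokens.size(1)
        pos = torch.arange(seq_len, device=tokens.device)
        h = self.tok_emb(tokens) + self.pos_emb(pos)        # h_0 (Eq. 1)
        # Additive causal mask: -inf above the diagonal blocks attention to future tokens.
        causal_mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=tokens.device), diagonal=1)
        h_L = self.blocks(h, mask=causal_mask)              # masked self-attention only
        logits = h_L @ self.tok_emb.weight.T                # P(c) = softmax(h_L W_e^T), Eq. 3
        return h_L, logits

def lm_loss(logits, tokens):
    """L1(C): negative log-likelihood of each token given its left context (Eq. 2)."""
    return nn.functional.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),        # predictions for positions < m
        tokens[:, 1:].reshape(-1))                          # shifted next-token targets
```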
3.1 Distantly Supervised Fine-tuning on Relation Extraction

After pre-training with the objective in Eq. 2, the language model is fine-tuned on the relation extraction task. We assume a labeled dataset $D = \{(x_i, head_i, tail_i, r_i)\}_{i=1}^{N}$, where each example consists of an input sequence of tokens $x_i = [x_1, \ldots, x_m]$, the positions $head_i$ and $tail_i$ of the relation's head and tail entity in the sequence of tokens, and the corresponding relation label $r_i$, assigned by distant supervision.

Due to its noisy annotation, label $r_i$ is an unreliable target for training. Instead, the relation classification is applied on a bag level, representing each entity pair $(head, tail)$ as a set $S = \{x_1, \ldots, x_n\}$ consisting of all sentences that contain the entity pair. A set representation $s$ is then derived as a weighted sum over the individual sentence representations:

$$s = \sum_i \alpha_i s_i, \qquad (4)$$

where $\alpha_i$ is the weight assigned to the corresponding sentence representation $s_i$. A sentence representation is obtained by feeding the token sequence $x_i$ of a sentence to the pre-trained model and using the last state $h_L^m$ of the final state representation $h_L$ as its representation $s_i$. The set representation $s$ is then used to inform the relation classifier.

We use selective attention (Lin et al., 2016), shown in Figure 2, as our approach for aggregating a bag-level representation $s$ based on the individual sentence representations $s_i$. Compared to average selection, where each sentence representation contributes equally to the bag-level representation, selective attention learns to identify the sentences with features most clearly expressing a relation, while de-emphasizing those that contain noise. The weight $\alpha_i$ is obtained for each sentence by comparing its representation against a learned relation representation $r$:

$$\alpha_i = \frac{\exp(s_i r)}{\sum_{j=1}^{n} \exp(s_j r)} \qquad (5)$$

To compute the output distribution $P(l)$ over relation labels, a linear layer followed by a softmax is applied to $s$:

$$P(l \mid S, \theta) = \mathrm{softmax}(W_r s + b), \qquad (6)$$

where $W_r$ is the representation matrix of relations $r$ and $b \in \mathbb{R}^{d_r}$ is a bias vector. During fine-tuning we want to optimize the following objective:

$$L_2(D) = \sum_{i=1}^{|S|} \log P(l_i \mid S_i, \theta) \qquad (7)$$

According to Radford et al. (2018), introducing language modeling as an auxiliary objective during fine-tuning improves generalization and leads to faster convergence. Therefore, our final objective combines Eq. 2 and Eq. 7:

$$L(D) = \lambda \cdot L_1(D) + L_2(D), \qquad (8)$$

where the scalar value $\lambda$ is the weight of the language model objective during fine-tuning.

3.2 Input Representation

Our input representation (see Figure 3) encodes each sentence as a sequence of tokens. To make use of sub-word information, we tokenize the input text using byte pair encoding (BPE) (Sennrich et al., 2016). The BPE algorithm creates a vocabulary of sub-word tokens, starting with single characters. Then, the algorithm iteratively merges the most frequently co-occurring tokens into a new token until a predefined vocabulary size is reached. For each token, we obtain its input representation by summing over the corresponding token embedding and positional embedding.

While the model is pre-trained on plain text sentences, relation extraction requires a structured input, namely a sentence and relation arguments. To avoid task-specific changes to the architecture, we adopt a traversal-style approach similar to Radford et al. (2018). The structured, task-specific input is converted to an ordered sequence to be directly fed to the model without architectural changes. Figure 3 provides a visual illustration of the input format. It starts with the tokens of the head and tail entity, separated by delimiters, followed by the token sequence of the sentence containing the entity pair, and ends with a special classification token. The classification token signals the model to generate a sentence representation for relation classification.
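The bag-level aggregation and combined fine-tuning objective of Section 3.1 (Eq. 4–8) can be sketched as follows. This is an illustrative PyTorch re-implementation rather than the released DISTRE code; `sentence_reps` is assumed to hold the final-state representations $s_i$ produced by the language model for every sentence of one bag.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectiveAttentionBagClassifier(nn.Module):
    """Aggregates sentence representations s_i of one bag into s and classifies the bag."""
    def __init__(self, hidden_dim, n_relations):
        super().__init__()
        self.rel_query = nn.Parameter(torch.randn(hidden_dim))   # learned relation repr. r (Eq. 5)
        self.classifier = nn.Linear(hidden_dim, n_relations)     # W_r and b (Eq. 6)

    def forward(self, sentence_reps):                  # sentence_reps: (bag_size, hidden_dim)
        scores = sentence_reps @ self.rel_query                  # s_i . r
        alpha = torch.softmax(scores, dim=0)                     # Eq. 5
        bag_rep = (alpha.unsqueeze(1) * sentence_reps).sum(0)    # s = sum_i alpha_i s_i (Eq. 4)
        return self.classifier(bag_rep)                          # logits for P(l | S, theta)

def fine_tuning_loss(bag_logits, bag_label, lm_nll, lam=0.5):
    """L = lambda * L1 + L2 (Eq. 8); lm_nll is the auxiliary LM loss, bag_label a scalar tensor."""
    l2 = F.cross_entropy(bag_logits.unsqueeze(0), bag_label.view(1))
    return lam * lm_nll + l2
```

A single learned query vector r is used here for simplicity; implementations following Lin et al. (2016) typically keep one relation representation per class.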
Since our model processes the input left-to-right, we add the relation arguments to the beginning, to bias the attention mechanism towards their token representation while processing the sentence's token sequence.

[Figure 3: Relation extraction requires a structured input for fine-tuning, with special delimiters to assign different meanings to parts of the input. The input embedding $h_0$ is created by summing over the positional embedding and the byte pair embedding for each token. States $h_l$ are obtained by self-attending over the states of the previous layer $h_{l-1}$.]

4 Experiment Setup

In the following section we describe our experimental setup. We run our experiments on the distantly supervised NYT10 dataset and use PCNN+ATTN (Lin et al., 2016) and RESIDE (Vashishth et al., 2018) as the state-of-the-art baselines. The piecewise convolutional neural network (PCNN) segments each input sentence into parts to the left, middle, and right of the entity pair, followed by convolutional encoding and selective attention to inform the relation classifier with a bag-level representation. RESIDE, on the other hand, uses a bidirectional gated recurrent unit (GRU) to encode the input sentence, followed by a graph convolutional neural network (GCN) to encode the explicitly provided dependency parse tree information. This is then combined with named entity type information to obtain a sentence representation that can be aggregated via selective attention and forwarded to the relation classifier.

4.1 NYT10 Dataset

The NYT10 dataset by Riedel et al. (2010) is a standard benchmark for distantly supervised relation extraction. It was generated by aligning Freebase relations with the New York Times corpus, with the years 2005–2006 reserved for training and 2007 for testing. We use the version of the dataset pre-processed by Lin et al. (2016), which is openly accessible online (https://drive.google.com/file/d/1eSGYObt-SRLccvYCsWaHx1ldurp9eDN_). The training data contains 522,611 sentences, 281,270 entity pairs and 18,252 relational facts. The test data contains 172,448 sentences, 96,678 entity pairs and 1,950 relational facts. There are 53 relation types, including NA if no relation holds for a given sentence and entity pair. Per convention we report Precision@N (precision scores for the top 100, top 200, and top 300 extracted relation instances) and a plot of the precision-recall curves. Since the test data is also generated via distant supervision, and can only provide an approximate measure of the performance, we also report P@100, P@200, and P@300 based on a manual evaluation.

4.2 Pre-training

Since pre-training is computationally expensive, and our main goal is to show its effectiveness by fine-tuning on the distantly supervised relation extraction task, we reuse the language model published by Radford et al. (2018) (https://github.com/openai/finetune-transformer-lm) for our experiments. The model was trained on the BooksCorpus (Zhu et al., 2015), which contains around 7,000 unpublished books with a total of more than 800M words of different genres. The model consists of L = 12 decoder blocks with 12 attention heads and 768 dimensional states, and a feedforward layer of 3072 dimensional states. We reuse the byte-pair encoding vocabulary of this model, but extend it with task-specific tokens (e.g., start, end, delimiter).

4.3 Hyperparameters

During our experiments we found the hyperparameters for fine-tuning, reported in Radford et al.
(2018), to be very effective. We used the Adam optimization scheme (Kingma and Ba, 2015) with $\beta_1 = 0.9$, $\beta_2 = 0.999$, a batch size of 8, a learning rate of 6.25e-5, and a linear learning rate decay schedule with warm-up over 0.2% of training updates. We trained the model for 3 epochs and applied residual and attention dropout with a rate of 0.1, and classifier dropout with a rate of 0.2.

5 Results

This section presents our experimental results. We compare DISTRE to other works on the NYT10 dataset, and show that it recognizes a more diverse set of relations, while still achieving state-of-the-art AUC, even without explicitly provided side information and linguistic features.

[Figure 4: Precision-Recall curve on the NYT dataset (DISTRE: AUC 0.422, RESIDE: AUC 0.415, PCNN+ATT: AUC 0.342, Mintz: AUC 0.106). Our method (DISTRE) shows a more balanced performance across relations, especially in the long tail. † marks results reported by Vashishth et al. (2018). ‡ indicates results we obtained with the OpenNRE implementation.]

5.1 Held-out Evaluation

Table 1 shows the results of our model on the held-out dataset. DISTRE with selective attention achieves a new state-of-the-art AUC value of 0.422. The precision-recall curve in Figure 4 shows that it outperforms RESIDE and PCNN+ATT at higher recall levels, while precision is lower for the top predicted relation instances. The results of the PCNN+ATT model indicate that its performance is only better in the very beginning of the curve, but its precision drops early and it only achieves an AUC value of 0.341. Similarly, RESIDE performs better in the beginning but drops in precision after a recall level of approximately 0.25. This suggests that our method yields a more balanced overall performance, which we believe is important in many real-world applications.

Table 1 also shows detailed precision values measured at different points along the P-R curve. We can again observe that while DISTRE has lower precision for the top 500 predicted relation instances, it shows a state-of-the-art precision of 60.2% for the top 1000 and continues to perform higher for the remaining, much larger part of the predictions.

System      AUC    P@100  P@200  P@300  P@500  P@1000  P@2000
Mintz†      0.107  52.3   50.2   45.0   39.7   33.6    23.4
PCNN+ATT‡   0.341  73.0   68.0   67.3   63.6   53.3    40.0
RESIDE†     0.415  81.8   75.4   74.3   69.7   59.3    45.0
DISTRE      0.422  68.0   67.0   65.3   65.0   60.2    47.9

Table 1: Precision evaluated automatically for the top rated relation instances. † marks results reported in the original paper. ‡ marks our results using the OpenNRE implementation (https://github.com/thunlp/OpenNRE).

5.2 Manual Evaluation and Analysis

Since automated evaluation on a distantly supervised, held-out dataset does not reflect the actual performance of the models, given false positive labels and incomplete knowledge base information, we also evaluate all models manually. This also allows us to gain a better understanding of the differences between the models in terms of their predictions. To this end, three human annotators manually rated the top 300 predicted relation instances for each model.
Annotators were asked to label a predicted relation as correct only if it expressed a true fact at some point in time (e.g., for a /business/person/company relationship, a person may have worked for a company in the past, but not currently), and if at least one sentence clearly expressed this relation, either via a syntactic pattern or via an indicator phrase.

Table 2 shows the P@100, P@200, P@300 and average precision scores, averaged over all annotators. PCNN+ATT has the highest average precision at 94.3%, 3% higher than the 91.2% of RESIDE and 5% higher than our model. However, we see that this is mainly due to PCNN+ATT's very high P@100 and P@200 scores. For P@300, all models have very similar precision scores. PCNN+ATT's scores decrease considerably, reflecting the overall trend of its PR curve, whereas RESIDE's and DISTRE's manual precision scores remain at approximately the same level. Our model's precision scores for the top rated predictions are around 2% lower than those of RESIDE, confirming the results of the held-out evaluation. Manual inspection of DISTRE's output shows that most errors among the top predictions arise from wrongly labeled /location/country/capital instances, which the other models do not predict among the top 300 relations.

Table 3 shows the distribution over relation types for the top 300 predictions of the different models. We see that DISTRE's top predictions encompass 10 distinct relation types, more than the other two models, with /location/location/contains and /people/person/nationality contributing 67% of the predictions. Compared to PCNN+ATT and RESIDE, DISTRE predicts additional relation types, such as /people/person/place_lived (e.g., “Sen. PER, Republican/Democrat of LOC”) and /location/neighborhood/neighborhood_of (e.g., “the LOC neighborhood/area of LOC”), with high confidence. RESIDE's top 300 predictions cover a smaller range of 7 distinct relation types, but also focus on /location/location/contains and /people/person/nationality (82% of the model's predictions). RESIDE's top predictions include additional relation types such as /business/company/founders (e.g., “PER, the founder of ORG”) and /people/person/children (e.g., “PER, the daughter/son of PER”). PCNN+ATT's high-confidence predictions are strongly biased towards a very small set of only four relation types. Of these, /location/location/contains and /people/person/nationality together make up 91% of the top 300 predictions. Manual inspection shows that for these relations, the PCNN+ATT model picks up on entity type signals and basic syntactic patterns, such as “LOC, LOC” (e.g., “Berlin, Germany”) and “LOC in LOC” (“Green Mountain College in Vermont”) for /location/location/contains, and “PER of LOC” (“Stephen Harper of Canada”) for /people/person/nationality. This suggests that the PCNN model ranks short and simple patterns higher than more complex patterns where the distance between the arguments is larger. The two other models, RESIDE and DISTRE, also identify and utilize these syntactic patterns. Table 4 lists some of the more challenging sentence-level predictions that our system correctly classified.

System     P@100  P@200  P@300  Avg Prec
PCNN+ATT   97.3   94.7   90.8   94.3
RESIDE     91.3   91.2   91.0   91.2
DISTRE     88.0   89.8   89.2   89.0

Table 2: Precision evaluated manually for the top 300 relation instances, averaged across 3 human annotators.
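For reference, the Precision@N and PR-AUC figures used throughout this evaluation can be computed from a confidence-ranked list of predicted relation instances roughly as follows. This is a minimal sketch using scikit-learn with illustrative variable names; it is not the evaluation script behind the reported numbers, which may differ in details such as how ties and the NA class are handled.

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve

def precision_at_n(is_correct, scores, n):
    """Precision over the n highest-confidence predicted relation instances."""
    order = np.argsort(-np.asarray(scores))
    return float(np.mean(np.asarray(is_correct)[order][:n]))

def pr_auc(is_correct, scores):
    """Area under the precision-recall curve of the ranked predictions."""
    precision, recall, _ = precision_recall_curve(is_correct, scores)
    return auc(recall, precision)

# is_correct: 1 if the predicted fact is in the held-out KB (or judged correct manually);
# scores: model confidence for each predicted (head, relation, tail) instance.
is_correct = [1, 0, 1, 1, 0, 1]
scores = [0.92, 0.85, 0.80, 0.66, 0.41, 0.33]
print(precision_at_n(is_correct, scores, 3), pr_auc(is_correct, scores))
```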
relation              DIS   RES   PCNN
location/contains     168   182   214
person/nationality    32    65    59
person/company        31    26    19
person/place_lived    22    –     –
country/capital       17    –     –
admin_div/country     13    12    6
neighborhood/nbhd_of  10    3     2
location/team         3     –     –
company/founders      2     6     –
team/location         2     –     –
person/children       –     6     –

Table 3: Distribution over the top 300 predicted relations for each method. DISTRE achieves performance comparable to RESIDE, while predicting a more diverse set of relations with high confidence. PCNN+ATT shows a strong focus on two relations: /location/location/contains and /people/person/nationality.

6 Related Work

Relation Extraction. Initial work in RE uses statistical classifiers or kernel based methods in combination with discrete syntactic features, such as part-of-speech and named entity tags, morphological features, and WordNet hypernyms (Mintz et al., 2009; Hendrickx et al., 2010). These methods have been superseded by sequence based methods, including recurrent (Socher et al., 2012; Zhang and Wang, 2015) and convolutional neural networks (Zeng et al., 2014, 2015). Consequently, discrete features have been replaced by distributed representations of words and syntactic features (Turian et al., 2010; Pennington et al., 2014). Xu et al. (2015a,b) integrated shortest dependency path (SDP) information into an LSTM-based relation classification model. Considering the SDP is useful for relation classification because it focuses on the action and agents in a sentence (Bunescu and Mooney, 2005; Socher et al., 2014). Zhang et al. (2018b) established a new state-of-the-art for relation extraction on the TACRED dataset by applying a combination of pruning and graph convolutions to the dependency tree. Recently, Verga et al. (2018) extended the Transformer architecture by a custom architecture for supervised biomedical named entity and relation extraction. In comparison, we fine-tune pre-trained language representations and only require distantly supervised annotation labels.

Distantly Supervised Relation Extraction. Early distantly supervised approaches (Mintz et al., 2009) use multi-instance learning (Riedel et al., 2010) and multi-instance multi-label learning (Surdeanu et al., 2012; Hoffmann et al., 2011) to model the assumption that at least one sentence per relation instance correctly expresses the relation. With the increasing popularity of neural networks, PCNN (Zeng et al., 2014) became the most widely used architecture, with extensions for multi-instance learning (Zeng et al., 2015), selective attention (Lin et al., 2016; Han et al., 2018), adversarial training (Wu et al., 2017; Qin et al., 2018), noise models (Luo et al., 2017), and soft labeling (Liu et al., 2017; Wang et al., 2018). Recent work showed graph convolutions (Vashishth et al., 2018) and capsule networks (Zhang et al., 2018a), previously applied to the supervised setting (Zhang et al., 2018b), to also be applicable in a distantly supervised setting. In addition, linguistic and semantic background knowledge is helpful for the task, but the proposed systems typically rely on explicit features, such as dependency trees, named entity types, and relation aliases (Vashishth et al., 2018; Yaghoobzadeh et al., 2017), or task- and domain-specific pre-training (Liu et al., 2018b; He et al., 2018), whereas our method only relies on features captured by a language model during unsupervised pre-training.
Language Representations and Transfer Learning Deep language representations have shown to be an effective form of unsupervised 1395 Sentence Relation Mr. Snow asked, referring to Ayatollah Ali Khamenei, Iran’s supreme leader, and Mahmoud Ahmadinejad, Iran’s president. /people/person/nationality In Oklahoma, the Democratic governor, Brad Henry, vetoed legislation Wednesday that would ban state facilities and workers from performing abortions except to save the life of the pregnant woman. /people/person/place lived Jakarta also boasts of having one of the oldest golf courses in Asia, Rawamangun , also known as the Jakarta Golf Club. /location/location/contains Cities like New York grow in their unbuilding: demolition tends to precede development, most urgently and particularly in Lower Manhattan, where New York City began. /location/location/contains Table 4: Examples of challenging relation mentions. These examples benefit from the ability to capture more complex features. Relation arguments are marked in bold. pre-training. Peters et al. (2018) introduced embeddings from language models (ELMo), an approach to learn contextualized word representations by training a bidirectional LSTM to optimize a disjoint bidirectional language model objective. Their results show that replacing static pre-trained word vectors (Mikolov et al., 2013; Pennington et al., 2014) with contextualized word representations significantly improves performance on various natural language processing tasks, such as semantic similarity, coreference resolution, and semantic role labeling. Ruder and Howard (2018) found language representations learned by unsupervised language modeling to significantly improve text classification performance, to prevent overfitting, and to increase sample efficiency. Radford et al. (2018) demonstrated that general-domain pre-training and task-specific fine-tuning, which our model is based on, achieves state-of-the-art results on several question answering, text classification, textual entailment, and semantic similarity tasks. Devlin et al. (2018) further extended language model pre-training by introducing a slot-filling objective to jointly train a bidirectional language model. Most recently (Radford et al., 2019) found that considerably increasing the size of language models results in even better generalization to downstream tasks, while still underfitting large text corpora. 7 Conclusion We proposed DISTRE, a Transformer which we extended with an attentive selection mechanism for the multi-instance learning scenario, common in distantly supervised relation extraction. While DISTRE achieves a lower precision for the 300 top ranked predictions, we observe a state-of-the-art AUC and an overall more balanced performance, especially for higher recall values. Similarly, our approach predicts a larger set of distinct relation types with high confidence among the top predictions. In contrast to RESIDE, which uses explicitly provided side information and linguistic features, our approach only utilizes features implicitly captured in pre-trained language representations. This allows for an increased domain and language independence, and an additional error reduction because pre-processing can be omitted. In future work, we want to further investigate the extent of syntactic structure captured in deep language language representations. Because of its generic architecture, DISTRE allows for integration of additional contextual information, e.g. 
background knowledge about entities and relations, which could also prove useful to further improve performance. Acknowledgments We would like to thank the anonymous reviewers for their comments. This research was partially supported by the German Federal Ministry of Education and Research through the projects DEEPLEE (01IW17001) and BBDC2 (01IS18025E), and by the German Federal Ministry of Transport and Digital Infrastructure through the project DAYSTREAM (19F2031A). 1396 References Christoph Alt, Marc H¨ubner, and Leonhard Hennig. 2019. Improving relation extraction by pre-trained language representations. In Proceedings of the 2019 Conference on Automated Knowledge Base Construction, Amherst, Massachusetts. Razvan C. Bunescu and Raymond J. Mooney. 2005. A Shortest Path Dependency Kernel for Relation Extraction. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT ’05, pages 724–731. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. Computing Research Repository (CoRR), abs/1810.04805. Miao Fan, Deli Zhao, Qiang Zhou, Zhiyuan Liu, Thomas Fang Zheng, and Edward Y. Chang. 2014. Distant Supervision for Relation Extraction with Matrix Completion. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 839– 849, Baltimore, Maryland. Association for Computational Linguistics. Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, and Peng Li. 2018. Hierarchical relation extraction with coarse-to-fine grained attention. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2236–2245. Association for Computational Linguistics. Zhengqiu He, Wenliang Chen, Zhenghua Li, Meishan Zhang, Wei Zhang, and Min Zhang. 2018. See: Syntax-aware entity embedding for neural relation extraction. In AAAI. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. Semeval-2010 Task 8: Multi-Way Classification of Semantic Relations between Pairs of Nominals. In SemEval@ACL. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 541–550, Portland, Oregon, USA. Association for Computational Linguistics. Heng Ji and Ralph Grishman. 2011. Knowledge base population: Successful approaches and challenges. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 1148–1158, Stroudsburg, PA, USA. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. International Conference on Learning Representations, abs/1412.6980. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural Relation Extraction with Selective Attention over Instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2124–2133, Berlin, Germany. Association for Computational Linguistics. Peter J. 
Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018a. Generating wikipedia by summarizing long sequences. ICLR. Tianyi Liu, Xinsong Zhang, Wanhao Zhou, and Weijia Jia. 2018b. Neural relation extraction via innersentence noise reduction and transfer learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2195–2204. Association for Computational Linguistics. Tianyu Liu, Kexiang Wang, Baobao Chang, and Zhifang Sui. 2017. A soft-label method for noisetolerant distantly supervised relation extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1790–1795. Association for Computational Linguistics. Bingfeng Luo, Yansong Feng, Zheng Wang, Zhanxing Zhu, Songfang Huang, Rui Yan, and Dongyan Zhao. 2017. Learning with noise: Enhance distantly supervised relation extraction with dynamic transition matrix. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 430–439. Association for Computational Linguistics. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. 2013. Distant Supervision for Relation Extraction with an Incomplete Knowledge Base. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 777–782, Atlanta, Georgia. Association for Computational Linguistics. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In ACL/IJCNLP. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP. 1397 Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke S. Zettlemoyer. 2018. Deep contextualized word representations. In NAACL-HLT. Pengda Qin, Weiran XU, and William Yang Wang. 2018. Dsgan: Generative adversarial training for distant supervision relation extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 496–505. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. available as a preprint. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling Relations and Their Mentions without Labeled Text. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD ’10). Sebastian Ruder and Jeremy Howard. 2018. Universal language model fine-tuning for text classification. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1715–1725. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In EMNLP-CoNLL. 
Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, and Andrew Y. Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. TACL, 2:207– 218. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D. Manning. 2012. Multi-instance Multi-label Learning for Relation Extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12, pages 455–465, Stroudsburg, PA, USA. Association for Computational Linguistics. Joseph P. Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In ACL. Shikhar Vashishth, Rishabh Joshi, Sai Suman Prayaga, Chiranjib Bhattacharyya, and Partha Talukdar. 2018. Reside: Improving distantly-supervised neural relation extraction using side information. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1257–1266. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Patrick Verga, Emma Strubell, and Andrew McCallum. 2018. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 872–884. Association for Computational Linguistics. Guanying Wang, Wen Zhang, Ruoxu Wang, Yalin Zhou, Xi Chen, Wei Zhang, Hai Zhu, and Huajun Chen. 2018. Label-free distant supervision for relation extraction via knowledge graph embedding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2246–2255. Association for Computational Linguistics. Yi Wu, David Bamman, and Stuart Russell. 2017. Adversarial training for relation extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1778–1783. Association for Computational Linguistics. Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2015a. Semantic relation classification via convolutional neural networks with simple negative sampling. In EMNLP. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015b. Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1785–1794. Association for Computational Linguistics. Yadollah Yaghoobzadeh, Heike Adel, and Hinrich Sch¨utze. 2017. Noise mitigation for neural entity typing and relation extraction. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1183–1194. Association for Computational Linguistics. Mo Yu, Wenpeng Yin, Kazi Saidul Hasan, C´ıcero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2017. Improved neural relation detection for knowledge base question answering. In ACL. 1398 Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1753–1762, Lisbon, Portugal. 
Association for Computational Linguistics. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2335–2344. Dublin City University and Association for Computational Linguistics. Dongxu Zhang and Dong Wang. 2015. Relation classification via recurrent neural network. arXiv preprint arXiv:1508.01006. Ningyu Zhang, Shumin Deng, Zhanling Sun, Xi Chen, Wei Zhang, and Huajun Chen. 2018a. Attentionbased capsule networks with dynamic routing for relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 986–992. Association for Computational Linguistics. Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018b. Graph Convolution over Pruned Dependency Trees Improves Relation Extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205– 2215, Brussels, Belgium. Association for Computational Linguistics. Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. 2015 IEEE International Conference on Computer Vision (ICCV), pages 19– 27.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1399–1408, Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

ARNOR: Attention Regularization based Noise Reduction for Distant Supervision Relation Classification

Wei Jia, Dai Dai, Xinyan Xiao and Hua Wu
Baidu Inc., Beijing, China
{jiawei07, daidai, xiaoxinyan, wu_hua}@baidu.com

Abstract

Distant supervision is widely used in relation classification in order to create large-scale training data by aligning a knowledge base with an unlabeled corpus. However, it also introduces a large amount of noisy labels, where the contextual sentence actually does not express the labeled relation. In this paper, we propose ARNOR, a novel Attention Regularization based NOise Reduction framework for distant supervision relation classification. ARNOR assumes that a trustable relation label should be explained by the neural attention model. Specifically, our ARNOR framework iteratively learns an interpretable model and utilizes it to select trustable instances. We first introduce attention regularization to force the model to pay attention to the patterns which explain the relation labels, so as to make the model more interpretable. Then, if the learned model can clearly locate the relation patterns of a candidate instance in the training set, we select it as a trustable instance for the further training steps. According to the experiments on NYT data, our ARNOR framework achieves significant improvements over state-of-the-art methods in both relation classification performance and noise reduction effect.

1 Introduction

Relation Classification (RC) is a fundamental task in natural language processing (NLP) and is particularly important for knowledge base construction. The goal of RC (Zelenko et al., 2003) is to identify the relation type of a given entity pair in a sentence. Generally, a relation should be explicitly expressed by some clue words. See the first sentence in Figure 1. The phrase “was born in” explains the relation type “place of birth” for “Bill Lockyer” and “California”. Such indicating words are called patterns (Hearst, 1992; Hamon and Nazarenko, 2001).

[Figure 1: Two relation instances generated by distant supervision (“Bill Lockyer was born in California” and “Bill Lockyer is an attorney general of California”, both aligned to the knowledge base triple (Bill Lockyer, place_of_birth, California)). The words “was born in” in the first sentence are the pattern that explains the relation type “place of birth”; hence, this instance is correctly labeled. However, the second instance is noisy due to the lack of a corresponding relation pattern.]

[Figure 2: Average attention weights of a BiLSTM+ATT model across five parts of the sentences (before entity 1, entity 1, between the entities, entity 2, after entity 2) on our test set. This model is trained using noisy data generated by distant supervision. It mainly pays attention to the input entity pair and ignores other words which might express the real relation. The same happens in Figure 1. This result comes from the fact that the DS method only depends on entities for labeling data.]

In order to cheaply obtain a large amount of labeled RC training data, Distant Supervision (DS) (Mintz et al., 2009) was proposed to automatically generate training data by aligning a knowledge base with an unlabeled corpus. It is built on a weak assumption that if an entity pair has a relationship in a knowledge base, all sentences that contain this pair will express the corresponding relation.
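As a concrete illustration of this labeling assumption, the alignment step can be sketched as follows; the data structures and the toy corpus are simplified assumptions for illustration, not part of ARNOR itself.

```python
def distant_supervision(corpus, kb_facts):
    """Label every sentence that mentions a KB entity pair with the KB relation.

    corpus:   list of (sentence_text, set_of_entity_mentions)
    kb_facts: dict mapping (entity1, entity2) -> relation type
    Sentences that mention the pair without expressing the relation
    become false positives, which is exactly the noise ARNOR targets.
    """
    instances = []
    for sentence, entities in corpus:
        for (e1, e2), relation in kb_facts.items():
            if e1 in entities and e2 in entities:
                instances.append((sentence, e1, e2, relation))
    return instances

kb_facts = {("Bill Lockyer", "California"): "place_of_birth"}
corpus = [
    ("Bill Lockyer was born in California", {"Bill Lockyer", "California"}),
    ("Bill Lockyer is an attorney general of California", {"Bill Lockyer", "California"}),
]
# Both sentences receive the label, but only the first one truly expresses it.
for instance in distant_supervision(corpus, kb_facts):
    print(instance)
```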
Unfortunately, DS obviously brings plenty of noisy data, which may significantly reduce the performance of an RC model. There may be no explicit relation pattern for identifying the relation; see the second sentence in Figure 1 for example. Mintz et al. (2009) report that distant supervision may lead to more than 30% noisy instances. On the other hand, based on these noisy data, attention-based neural models often only attend to entity words but fail to attend to patterns (see Figure 2).

There are mainly three kinds of methods for dealing with such a noise problem. First, multi-instance learning (Riedel et al., 2010; Lin et al., 2016; Surdeanu et al., 2012; Zeng et al., 2015) relaxes the DS assumption to at-least-one: in a bag of sentences that mention the same entity pair, it assumes that at least one sentence expresses the relation. Multi-instance learning carries out classification at the bag level and often fails to perform well on sentence-level prediction (Feng et al., 2018b). Secondly, in order to reduce noise for sentence-level prediction, researchers resort to reinforcement learning or adversarial training to select trustable data (Feng et al., 2018b; Qin et al., 2018a; Han et al., 2018; Xiangrong et al., 2018; Qin et al., 2018b). This line of research selects confident relation labels by matching the predicted label of the learned model with the DS-generated label. As the model is also learned from DS data, it might still fail when the model predictions and the DS-generated labels are both wrong. The third method relies on relation patterns. Pattern-based extraction is widely used in information extraction (Hearst, 1992; Hamon and Nazarenko, 2001). Among them, the generative model of Takamatsu et al. (2012) directly models the labeling process of DS and finds noisy patterns that mistakenly label a relation. Data programming (Ratner et al., 2016, 2017) fuses DS-based labels and manual relation patterns for reducing noise.

In this paper, we propose ARNOR, a novel attention regularization based framework for noise reduction. ARNOR aims to train a neural model which is able to clearly explain the relation patterns through Attention Regularization (AR), and at the same time reduce noise based on an assumption: the clearer the model explains the relation in an instance, the more trustable this instance is. Specifically, our ARNOR framework iteratively learns the interpretable model and selects trustable instances. We first use attention regularization to make the neural model focus on relation patterns (Section 3.4 will introduce the pattern construction). Then, if the learned model can discover patterns for candidate instances, we will select these candidates as correctly labeled data for the further training steps. These two steps are mutually reinforcing: the more interpretable the model is, the better the training data that is selected, and vice versa.

In addition, most previous DS-based RC models are evaluated approximately on a test set which is split from the training set and thus is also full of noisy data. We argue that this might not be the best choice. Instead, we use a recently released sentence-level test set (Ren et al., 2017) for evaluation. However, there also exist several problems in this test set (see Sec. 4.1). We come up with a revised version that is larger and more precise.

Overall, our contributions are as follows:
1. We propose a novel attention regularization method for reducing the noise in DS. Our method forces the model to clearly explain the relation patterns in terms of attention, and selects trustable instances if they can be explained by the model.

2. Our ARNOR framework achieves significant improvement over state-of-the-art noise reduction methods, in terms of both RC performance and noise reduction effect.

3. We publish a better manually labeled sentence-level test set for evaluating the performance of RC models (available at https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research/ACL2019-ARNOR). This test set contains 1,024 sentences and 4,543 entity pairs, and is carefully annotated to ensure accuracy.

2 Related Work

We deal with DS-based RC in this paper. For the RC task, various models have recently been proposed based on different neural architectures, such as convolutional neural networks (Zeng et al., 2014, 2015) and recurrent neural networks (Zhang et al., 2015; Zhou et al., 2016). To automatically obtain a large training dataset, DS has been proposed (Mintz et al., 2009). However, DS also introduces noisy data, making DS-based RC more challenging. Previous studies have attempted several kinds of methods to solve the noise problem.

[Figure 3: An overview of our ARNOR framework. It is based on a BiLSTM with an attention mechanism and utilizes attention regularization to force the model to attend to the corresponding relation patterns. Then, an instance selector calculates a confidence score for each training instance to generate a new redistributed training set and a new trustable pattern set. These two steps are run iteratively to form a bootstrap learning procedure.]

The first widely studied method is based on multi-instance learning (Riedel et al., 2010; Lin et al., 2016; Surdeanu et al., 2012; Zeng et al., 2015). However, it models the noise problem over a bag of instances and is not suitable for sentence-level prediction. The second kind of approach utilizes RL (Feng et al., 2018b; Xiangrong et al., 2018; Qin et al., 2018b) or adversarial training (Qin et al., 2018a; Han et al., 2018) to select trustable instances. The third research line relies on patterns (Hearst, 1992; Hamon and Nazarenko, 2001). Takamatsu et al. (2012) directly model the labeling process of DS to find noisy patterns. Ratner et al. (2016, 2017) propose to fuse DS-based labels and manual relation patterns for reducing noise. Feng et al. (2018a) present a pattern extractor based on RL and use extracted patterns as features for RC.

3 The ARNOR Framework

In this paper, we reduce DS noise and make the model more interpretable according to the observation that a relation should be expressed by its sentence context. Generally, an RC classifier should rely on relation patterns to decide the relation type for a pair of entities. Thus, for a training instance, if such an interpretable model cannot attend to the pattern that expresses the relation type, it is possible that this instance is noise. Our ARNOR framework consists of two parts: attention regularization training and instance selection. First, we hope the model is capable of locating relation patterns.
Thus, attention regularization is applied to guide the training of the model, forcing it to pay attention to given pattern words. Then, we select instances by checking whether the model can give a clear explanation for the relation label generated by DS. These two steps will be repeated in a bootstrap procedure. We illustrate our method in Figure 3. 3.1 Attention-based BiLSTM Encoder In order to capture the key feature words for identifying relations, we apply an attention mechanism over a BiLSTM Encoder, which is first introduced in (Zhou et al., 2016) for RC. The model architecture is illustrated on the left side of Figure 3. Input Embeddings. The input embeddings consist of three parts: word embedding, position embedding, and entity type embedding. Position embedding is first proposed by Zeng et al. (2014) to incorporate position information of input entity pair and has been widely used in the following RC models. We also introduce entity type information by looking up an entity type embedding matrix. The final input embeddings are a concatenation of these embeddings, and are fed to a bidirectional Long Short Term Memory (BiLSTM) with an attention mechanism to generate sentence representation. Attention-based BiLSTM. Let H = {hi} denotes the hidden vectors of BiLSTM encoder. The final sentence representation u is a weighted sum of 1402 these vectors, M = tanh(H) a = softmax(wT M) u = HaT (1) where wT is a trained parameter vector. It is demonstrated that attention mechanism is helpful in capturing important features for classification tasks. However, for noisy data generated by distant supervision, it almost only focuses on entities, but neglects relation patterns which are more informative for RC. 3.2 Training with Attention Regularization Attention Regularization (AR) aims to teach the model to attend to the relation patterns for identifying relations. Given a T-word sentence s = {xi}T i=1, a pair of entities (e1, e2) in the s, a relation label y, and a relation patterns m that explains the relation y of e1 and e2. (Section 3.4 will introduce the construction of relation patterns m). We are able to calculate an attention guidance value am, according to pattern mention significance function q(z|s, e1, e2, m) conditional on the input m. Here z represents the pattern words in a sentence. We hope that the classifier can approximate its attention distribution as = p(z|s) to am, where p represents the classifier network. Intuitively, we apply KL (Kullback–Leibler divergence) as the optimized function, which describes the differences between distributions: KL(am||as) = X am log am as (2) What is more, the Equation 2 can be further reduced as following: lossa = X am log am as = X (am log am −am log as) (3) where lossa represents the loss of attention regularization. Because am contains fixed values, the equation is equal to lossa = − X am log as (4) Therefore, we adapt lossa into classification loss lossc to regularize attention learning. The final loss is loss = lossc + βlossa (5) where β is a weight for lossa, which is generally set as 1 in our experiments. In this paper, we implement a fairly simple function to generate am. bi = ( 1 xi ∈{e1, e2, m} 0 else am = ( bk PT i=1 bi )T k=1 (6) Here b denotes that whether xi belongs to entity words and relation pattern words. 3.3 Instance Selection with Attention from Model Based on attention mechanism, a trained RC model can tells us the importance of each word for identifying the relation type. 
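Before detailing how this attention is used to judge individual training instances, the computations of Sections 3.1–3.2 can be condensed into a short sketch. This is only an illustrative paraphrase, not the released ARNOR code: the PyTorch dependency, the tensor shapes, and the helper names (`target_attention`, `model_attention`, `sentence_repr`, `arnor_loss`) are our own assumptions.

```python
import torch
import torch.nn.functional as F

def target_attention(token_is_key: torch.Tensor) -> torch.Tensor:
    """Eq. 6: uniform target attention a_m over entity and pattern words.

    token_is_key: (T,) float tensor with 1.0 for tokens in {e1, e2, m}, else 0.0.
    """
    return token_is_key / token_is_key.sum()

def model_attention(H: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Eq. 1: a_s = softmax(w^T tanh(H)); H holds the BiLSTM states, shape (T, 2d)."""
    scores = torch.tanh(H) @ w                   # one score per token, shape (T,)
    return F.softmax(scores, dim=-1)

def sentence_repr(H: torch.Tensor, a_s: torch.Tensor) -> torch.Tensor:
    """Eq. 1: u = attention-weighted sum of the hidden states."""
    return a_s @ H                               # shape (2d,)

def arnor_loss(logits, label, a_s, a_m, beta=1.0):
    """Eq. 5: classification loss plus the attention regularizer of Eq. 4."""
    loss_c = F.cross_entropy(logits.unsqueeze(0), label.view(1))
    # Because a_m is fixed, KL(a_m || a_s) reduces, up to a constant,
    # to the cross-entropy term -sum(a_m * log a_s) of Eq. 4.
    loss_a = -(a_m * torch.log(a_s + 1e-12)).sum()
    return loss_c + beta * loss_a
```

The sketch makes explicit why Eq. 2 can be replaced by Eq. 4 during training: only the cross-entropy part of the KL divergence depends on the model parameters.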
For a training instance, if the relation pattern words that the model focuses on do not match the pattern m which explains the relation type, this instance is probably a false positive. Here we still apply KL to measure the probability that an instance is a false positive. Given the attention weights as from the RC model and am calculated by Equation 6, the confidence score c of an instance is normalized by c = 1 1 + KL(am||as) (7) The higher c is, the more confident an instance is. We calculate the confidence score for all instances in the training set and select instances whose score is more than a threshold ct, which is a hyperparameter. 3.4 Bootstrap Learning Procedure In our ARNOR framework, an important problem is how to acquire relation patterns m in model training and instance selecting step. In the model training step, we need more precise patterns in order to guide the model to attend to important evidence for RC. While in the instance selection step, more various patterns are required so as to select more trustable data as well as to discover more confident relation patterns. Here we will simply define the process of the bootstrap learning steps. In model training, given 1) a pattern extractor E which can extract a relation patterns from an instance, 2) an initial trustable pattern set M (which might be manually collected or simply counted up from original training dataset D using E). First, 1403 Algorithm 1 The ARNOR Framework Require: DS dataset D, a relation classifier C with parameters θθθ 1: Collect high frequency patterns from D into M 2: Redistribute D by M 3: loop 4: Train classifier C with D and M 5: Update parameters θθθ by Attention Regularization 6: Get confident score c by C for D 7: Update M by high score c from D 8: Redistribute D by new M 9: end loop we redistribute training dataset D based on M (described below). Then, the RC model is trained for epochs only using m in M. Next, instance selection is run on D to select more confident training data. These new trustable instances are fed to E to figure out new trustable patterns and put them into M. We repeat such a bootstrap procedure until the F1 score on dev set does not increase. This bootstrap procedure is detailed in Algorithm 1. Relation Pattern Extraction. Another problem is how to build a relation pattern extractor E to extract a pattern from an instance. However, we find it is not quite critical. Even though we use a very simple method, we still achieve considerable improvement. It is certain that a more complicated and well-performed extractor will bring additional improvement. This will be one of our future work. Our pattern extractor E simply takes the words between two entities as a relation pattern. For the building of the initial pattern set M, we extract relation patterns from all instances in original training dataset and count them up. M is initially built by selecting patterns with occurrences. We retain top 10% (maximum 20) patterns for each relation type. Data Redistribution. After the trustable pattern set M is built, dataset D will be redistributed using these patterns. All positive instances that are not matched these patterns will be put into the negative set, revising their relation label to ‘None’. We will explain the reason for data redistributing in our experiment section. NYT Training Test #Sentences 235,253 1,024 #Instances 371,461 4,543 #Positive instances 110,518 671 Table 1: Statistics of the dataset in our experiments. 
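Before turning to the experiments, the instance-selection machinery of Sections 3.3–3.4 can also be made concrete. The NumPy sketch below is a hedged illustration under our own assumptions: `extract_pattern`, `confidence`, and `select_instances` are hypothetical helper names, `model.attention` and `inst.target_attention` are placeholder hooks for the classifier attention (Eq. 1) and the target distribution (Eq. 6), and the 0.5 threshold follows the value reported later in the implementation details.

```python
import numpy as np

def extract_pattern(tokens, e1_span, e2_span):
    """Naive extractor E: take the words strictly between the two entity spans.

    Spans are (start, end) token offsets with the end index exclusive.
    """
    left = min(e1_span[1], e2_span[1])
    right = max(e1_span[0], e2_span[0])
    return tuple(tokens[left:right])

def confidence(a_m: np.ndarray, a_s: np.ndarray) -> float:
    """Eq. 7: c = 1 / (1 + KL(a_m || a_s)), summed over the support of a_m."""
    mask = a_m > 0
    kl = float(np.sum(a_m[mask] * np.log(a_m[mask] / (a_s[mask] + 1e-12))))
    return 1.0 / (1.0 + kl)

def select_instances(instances, model, c_t=0.5):
    """One selection pass of Algorithm 1: keep instances the model can explain."""
    trusted = []
    for inst in instances:
        a_s = model.attention(inst)        # classifier attention, Eq. 1
        a_m = inst.target_attention        # target attention, Eq. 6
        if confidence(a_m, a_s) > c_t:
            trusted.append(inst)
    return trusted
```

A full bootstrap round in Algorithm 1 would then retrain the classifier on the selected instances, mine new high-confidence patterns from them with the extractor, and redistribute the data before the next pass.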
NYT Training Test #/location/location/contains 60,215 317 #/people/person/nationality 8,349 66 #/location/country/capital 7,959 13 #/people/person/place lived 7,438 148 #/business/person/company 5,788 84 #/location/nei.../neighborhood of 5,737 1 #/people/person/place of birth 3,279 14 #/people/person/place of death 2,002 9 #/business/company/founders 827 11 #/people/person/children 523 8 Table 2: The 10 relation types we retain and statistics of them in the dataset. The distribution of some relation types are distinct in test set because they are much more noisy. 4 Experiments 4.1 Dataset and Evaluation We evaluate the proposed ARNOR framework on a widely-used public dataset: NYT, which is a news corpus sampled from 294k 1989-2007 New York Times news articles and is first presented in (Riedel et al., 2010). Most previous work commonly generates training instances by aligning entity pairs from Freebase and adopt held-out evaluation to evaluate without costly human annotation. Such an evaluation can only provide an approximate measure due to the noisy test set that is also generated by distant supervision. In contrast, Ren et al. (2017) publishes a training set which is also generated by distant supervision, but a manuallyannotated test set that contains 395 sentences from Hoffmann et al. (2011). However, we find that this test set was annotated with only one entity pair for one sentence. Not all of the triplets in these sentences are marked out. In addition, although there are enough test instances (3,880 including “None” type), the number of positive ones is relatively small (only 396). Moreover, the test set only contains half of the relation types of the training set. To address these issues and evaluate our ARNOR framework more precisely, we annotate and publish a new sentence-level test set (the source address is in section 1) on the basis of the one released by Ren et al. (2017), which also con1404 Method Dev Test Prec. Rec. F1 Prec. Rec. F1 CNN (Zeng et al., 2014) 38.32 65.22 48.28 35.75 64.54 46.01 PCNN (Zeng et al., 2015) 36.09 63.66 46.07 36.06 64.86 46.35 BiLSTM 36.71 66.46 47.29 35.52 67.41 46.53 BiLSTM+ATT 37.59 64.91 47.61 34.93 65.18 45.48 PCNN+SelATT (Lin et al., 2016) 46.01 30.43 36.64 45.41 30.03 36.15 CNN+RL1 (Qin et al., 2018b) 37.71 52.66 43.95 39.41 61.61 48.07 CNN+RL2 (Feng et al., 2018b) 40.00 59.17 47.73 40.23 63.78 49.34 ARNOR (Ours) 62.45 58.51 60.36 65.23 56.79 60.90 Table 3: Comparison of our method and other baselines. The first three methods are normal RC model, and the middle three baselines are models for distant supervision RC. tains annotated named entity types. Firstly, we revise mislabeled instances on the original 395 testing sentences. Then, about 600 sentences are sampled and removed from the original training set. We carefully check their labels and merge them into the test set. We also remove some of the relation types which are overlapping and ambiguous or are too noisy to obtain a non-noise test sample. The details of this dataset and the relation types we used is shown in Table 1 and Table 2. For evaluation, we evaluate our framework on sentence-level (or instance-level). Sentence-level prediction is more friendly with comprehend sentence tasks, like question answering and semantic parsing (Feng et al., 2018b). Different from commonly-used bag-level evaluation, a sentencelevel evaluation compute Precision (Prec.), Recall (Rec.) and F1 metric directly on all of the individual instances in the dataset. 
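Concretely, one natural reading of this sentence-level protocol is micro-averaged precision and recall over individual (sentence, entity pair) predictions, with the "None" class treated as no relation. The helper below is our own sketch of such a metric, not part of the paper's evaluation code:

```python
def sentence_level_prf(gold, pred, none_label="None"):
    """Micro P/R/F1 over individual instances; 'None' counts as no relation."""
    tp = sum(1 for g, p in zip(gold, pred) if p == g != none_label)
    n_pred = sum(1 for p in pred if p != none_label)
    n_gold = sum(1 for g in gold if g != none_label)
    prec = tp / n_pred if n_pred else 0.0
    rec = tp / n_gold if n_gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```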
We think such an evaluation is more intuitive and suitable for a realworld application. 4.2 Baselines We compare our ARNOR framework with several strong baselines for noise reduction as follows: PCNN+SelATT (Lin et al., 2016) is a bag-level RC model. It adopts an attention mechanism over all sentences in a bag and thus can reduce the weight of noise data. CNN+RL2 (Feng et al., 2018b) is a novel reinforcement learning (RL) based model for RC from noisy data. It jointly trains a CNN model for RC as well as an instance selector to remove unconfident samples. CNN+RL1 (Qin et al., 2018b) also introduces RL to heuristically recognize false positive instances. Different from Feng et al. (2018b), they redistribute false positives into negative samples instead of removing them. Meanwhile, to demonstrate the effectiveness of RC after denoising, several non-denoising methods are also used for comparison. CNN (Zeng et al., 2014) is a widely-used architecture for RE. It introduces position embeddings to represent the location of an input entity pair. PCNN (Zeng et al., 2015) is a revision of CNN which uses piecewise max-pooling to extract more relation features. BiLSTM (Zhang et al., 2015) is also commonly used for RE with the help of position embeddings. BiLSTM+ATT (Zhou et al., 2016) adds an attention mechanism into BiLSTM to capture the most important features for identifying relations. It is the base model used in our ARNOR framework. 4.3 Implementation Details For our model and other BiLSTM-based baselines, the word embeddings are randomly initialized with 100 dimensions. The position embeddings and entity type embeddings are randomly initialized with 50 dimensions. The size of BiLSTM hidden vector is set to 500. In attention regularization training, parameter β is set to 1. We set the learning rate as 0.001 and utilize Adam for optimization. To better evaluate our models, we averagely split the test dataset into a development set and a testing set. In instance selection step, an appropriate confidence score threshold is set to 0.5 that should be various in other datasets. And we take max 5 new patterns in a loop for each relation type. In bootstrap procedure, we run 10 epochs in the first loop, and 1 epoch in the rest loops until the classification performance on dev set dose not increase. Generally, the bootstrap procedure end 1405 Model Prec. Rec. F1 BiLSTM+ATT 34.93 65.18 45.48 + IDR 70.95 40.57 51.63 + ART 68.70 50.99 58.52 + BLP 65.23 56.79 60.90 Table 4: Evaluation of components in our framework. BiLSTM+ATT is the base model without reducing noise. IDR stands for initial data redistributing using initial confident pattern set. ART denotes attention regularization training for the first loop. BLP stands for bootstrap learning procedure. Model Prec. Rec. F1 CNN 35.75 64.54 46.01 CNN+RL2 40.23 63.78 49.34 CNN+IDR 84.87 39.94 54.32 CNN+IDR+RL2 83.63 44.27 57.89 Table 5: Results of CNN+RL2 (Feng et al., 2018b) starts with a pre-trained CNN model using initial data redistributing (IDR). CNN+IDR is the model trained on initially redistributed data and CNN+IDR+RL2 applies RL2 on pre-trained CNN+IDR model. in 5 loops. For CNN-based baselines, we use the same embedding settings. The window size of the convolution layer is set to 3 and the number of the filter is set to 230. All the baselines for noise reduction were implemented with the source codes released by their authors. 4.4 Main Results We compare the results of ARNOR with nondenoising baselines and denoising baselines. 
As shown in Table 3, ARNOR significantly outperforms all of the baselines in both precision and F1 metric, obtaining about 11% F1 improvement over the state-of-the-art CNN+RL2. Note that our model achieves a tremendous improvement on precision without too much decline of recall. This demonstrates the proposed framework can effectively reduce the impact of noisy data. Besides, PCNN+SelATT performs the worst among all of the baselines. We think that it is because PCNN+SelATT is a bag-level method and is not suitable for sentence-level evaluation, which is consistent with Feng et al. (2018b). Noise Reduction Prec. Rec. F1 CNN+RL2 40.58 96.31 57.10 ARNOR 76.37 68.13 72.02 Table 6: Comparison of effectiveness on noise reduction. We randomly sample 200 sentences (529 instances) from the training set. After manually checking, 213 of them are not noise. We use these samples to evaluate the capability of reducing noise. 5 Analysis and Discussion 5.1 Effects of components In order to find which component contributes to our framework, we evaluate our model by adding each of the components. The results are shown in Table 4. BiLSTM+ATT is the baseline model that is trained by original noisy data. After using the initial redistributed dataset, which is generated by the method described in the above section, the BiLSTM+ATT model achieves about 6% improvement in F1. And the precision sharply increases by about 26%. This demonstrates that the DS dataset contains a large proportion of noise. Even such a simple filtering noise method can effectively improve model performance. However, this simple method seriously affects recall. On the one hand, amounts of true positives with long-tail patterns will be mistakenly regarded as false negatives. And we guess some relation patterns in training data are too rare to make the model learn to attend them. Therefore, after we add attention regularization to the model, the recall increases by about 10% with only 2% decline in precision. As a result, our model achieves another 7% F1 improvement. We believe this is the power of guiding the model to understand which words are more crucial for identifying relations. After we obtain an initial model trained by attention regularization, we continue the bootstrap learning procedure and finally achieve 2.4% F1 improvement. In this procedure, ARNOR will collect more confident longtail patterns to improve the recall of the model. 5.2 Start with small clean or large noisy data In the previous section, we have found that the initial redistributed dataset (with small but clean positive data) helps the model improve a lot. On the contrary, the previous neural network-based model for distant supervision RC, including all baselines in this paper, usually starts with the original dataset which is large but noisy. Which is the 1406 Jim Kimsey , a founder of AOL ; Jack Valenti , former head … 0.36 0.19 Entity 1: AOL Entity 2: Jim Kimsey Relation: /business/company/founders BiLSTM+ATT Jim Kimsey , a founder of AOL ; Jack Valenti , former head … 0.12 0.13 0.14 0.12 ARNOR … said Senomyx ’s chief executive , Kent Snyder . 0.30 0.03 0.31 … said Senomyx ’s chief executive , Kent Snyder . 0.15 0.15 0.13 0.13 0.11 Entity 1: Kent Snyder Entity 2: Senomyx Relation: /business/person/company Table 7: Here is attention cases with a heat map. These cases have shown our model’s ability to locating relation indicators. Based on attention supervision, our model can concentrate on relation patterns and entities. 
High Frequency Pattern Long Tail /people/person/children #Occ 7 e2 , the son of e1 4 e2 , daughter of e1 1 e1 's youngest son , e2 1 e2 , the son of Secretary General e1 1 e2 , a daughter of Representative e1 High Frequency Pattern Long Tail /business/person/company #Occ 74 e2 secretary general , e1 68 e1 , the chairman of e2 67 e1 , chief executive of e2 4 e1 , the secretary general of the e2 3 e1 , the chief executive of the e2 3 e1 , the oil minister of e2 2 e1 , the former chief executive of e2 2 e1 , the vice chairman of e2 Table 8: Pattern set cases. This table has shown some high frequency and top long tail patterns discovered by our model in pattern bootstrap. better choice? In order to figure it out, we use the same initial redistributed dataset to pre-train the CNN which is used in the CNN+RL2 and then apply RL2 procedure for noise reduction on the original noisy dataset. We report the results in Table 5. The pre-trained PCNN also achieves a significant improvement, and after further denoising by RL2, CNN+RL2 finally obtain 57.89% in F1, which is still 3% lower than the performance of our model. Therefore, we consider that starting the model with a small but clean dataset might be a choice for noise reduction. 5.3 Effects of Noise Reduction The instance selector in our ARNOR framework calculates a confidence score for each instance in the training set by checking whether the attention weights matches a given pattern. Then we utilize this confidence score to reduce noise. In order to verify the capability of reducing noise, we randomly sample 200 sentences to annotate whether they are noise and use them to evaluate the accuracy of noise reduction. We compare the results with CNN+RL2 in Table 6. The ARNOR significantly outperforms CNN+RL2 on percision and obtains a 14.92% F1 improvement. 5.4 Case Study Our ARNOR is able to make the RC model more interpretable through attention regularization training. To verify this point, we select some instances from the test set and visualize their attention weights for a case study. As shown in Table 7, BiLSTM+ATT which is trained on original noisy data only focuses on the entity pairs, and makes wrong predictions on these cases. This is probably because the model does not learn the key evidence for RC. While ARNOR can perfectly capture the important features and correctly predict the relation. In addition, we also check the confident patterns which are discovered in bootstrap learning. As presented in Table 8, the high-frequency patterns can be easily obtained by initially building of confident pattern set, and after bootstrap learning, we can discover more long-tail patterns, most of which are representative and meaningful. More importantly, some of these additional patterns are not similar in literal terms, demonstrating the model might learn the semantic correlation among related feature words. 6 Conclusion We propose ARNOR, an attention regularizationbased noise reduction framework for distant supervision relation classification. We find relation pattern is an important feature but is rarely captured by the previous model trained on noisy data. Thus, we design attention regulation to help the model learn the locating of relation patterns. With a more interpretable model, we then conduct noise reduction by evaluating how well the model explains the relation of an instance. A bootstrap learning procedure is built to iteratively improve the model, training data and trustable pattern set. 
With a very simple pattern extractor, we outperform several strong RL-based baselines, achieving 1407 significant improvements on both relation classification and noise reduction. In addition, we publish a better manually labeled test set for sentence-level evaluation. In the future, we hope to improve our work by the utilization of better model-based pattern extractor, and resorting to latent variable model (Kim et al., 2018) for jointly modeling instance selector. What is more, we also hope to verify the effectiveness of our method on more tasks, including open information extraction and event extraction, and also overlapping relation extraction models (Dai et al., 2019). Acknowledgments This work was supported by the Natural Science Foundation of China (No. 61533018). References Dai Dai, Xinyan Xiao, Yajuan Lyu, Qiaoqiao She, Shan Dou, and Haifeng Wang. 2019. Joint extraction of entities and overlapping relations using positionattentive sequence labeling. In Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019), Honolulu, USA, January 27, 2019. Jun Feng, Minlie Huang, Yijie Zhang, Yang Yang, and Xiaoyan Zhu. 2018a. Relation mention extraction from noisy data with hierarchical reinforcement learning. arXiv preprint arXiv:1811.01237. Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xiaoyan Zhu. 2018b. Reinforcement learning for relation classification from noisy data. In Proceedings of AAAI. Thierry Hamon and Adeline Nazarenko. 2001. Detection of synonymy links between terms: experiment and results. Recent advances in computational terminology, 2:185–208. Xu Han, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distant supervision for relation extraction via instance-level adversarial training. arXiv preprint arXiv:1805.10959. Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In COLING 1992 Volume 2: The 15th International Conference on Computational Linguistics. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language TechnologiesVolume 1, pages 541–550. Association for Computational Linguistics. Yoon Kim, Sam Wiseman, and Alexander M Rush. 2018. A tutorial on deep latent variable models of natural language. arXiv preprint arXiv:1812.06834. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2124–2133. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003–1011. Association for Computational Linguistics. Pengda Qin, Weiran Xu, and William Yang Wang. 2018a. Dsgan: Generative adversarial training for distant supervision relation extraction. arXiv preprint arXiv:1805.09929. Pengda Qin, Weiran Xu, and William Yang Wang. 2018b. Robust distant supervision relation extraction via deep reinforcement learning. arXiv preprint arXiv:1805.09927. Alexander Ratner, Stephen H Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher R´e. 2017. 
Snorkel: Rapid training data creation with weak supervision. Proceedings of the VLDB Endowment, 11(3):269–282. Alexander J Ratner, Christopher M De Sa, Sen Wu, Daniel Selsam, and Christopher R´e. 2016. Data programming: Creating large training sets, quickly. In Advances in neural information processing systems, pages 3567–3575. Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R Voss, Heng Ji, Tarek F Abdelzaher, and Jiawei Han. 2017. Cotype: Joint extraction of typed entities and relations with knowledge bases. In Proceedings of the 26th International Conference on World Wide Web, pages 1015–1024. International World Wide Web Conferences Steering Committee. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 148–163. Springer. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, pages 455–465. Association for Computational Linguistics. 1408 Shingo Takamatsu, Issei Sato, and Hiroshi Nakagawa. 2012. Reducing wrong labels in distant supervision for relation extraction. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 721–729. Association for Computational Linguistics. Zeng Xiangrong, Liu Kang, He Shizhu, Zhao Jun, et al. 2018. Large scaled relation extraction with reinforcement learning. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. Journal of machine learning research, 3(Feb):1083–1106. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1753– 1762. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2335–2344. Shu Zhang, Dequan Zheng, Xinchen Hu, and Ming Yang. 2015. Bidirectional long short-term memory networks for relation classification. In Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation, pages 73–78. Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attentionbased bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 207–212.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1409–1418 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1409 GraphRel: Modeling Text as Relational Graphs for Joint Entity and Relation Extraction Tsu-Jui Fu Academia Sinica [email protected] Peng-Hsuan Li Academia Sinica [email protected] Wei-Yun Ma Academia Sinica [email protected] Abstract In this paper, we present GraphRel, an end-to-end relation extraction model which uses graph convolutional networks (GCNs) to jointly learn named entities and relations. In contrast to previous baselines, we consider the interaction between named entities and relations via a relation-weighted GCN to better extract relations. Linear and dependency structures are both used to extract both sequential and regional features of the text, and a complete word graph is further utilized to extract implicit features among all word pairs of the text. With the graph-based approach, the prediction for overlapping relations is substantially improved over previous sequential approaches. We evaluate GraphRel on two public datasets: NYT and WebNLG. Results show that GraphRel maintains high precision while increasing recall substantially. Also, GraphRel outperforms previous work by 3.2% and 5.8% (F1 score), achieving a new state-of-the-art for relation extraction. 1 Introduction Extracting pairs of entity mentions with semantic relations, i.e., triplets such as (BarackObama, PresidentOf, UnitedStates), is a central task in information extraction and allows automatic knowledge construction from unstructured text. Though important and well-studied, three key aspects are yet to be fully handled in an unified framework: • End-to-end joint modeling of entity recognition and relation extraction; • Prediction of overlapping relations, i.e., relations that share a common mention; • Consideration of the interaction between relations, especially overlapping relations. Traditionally, a pipelined approach is used to first extract entity mentions using a named entity recognizer and then predict the relations between every pair of extracted entity mentions (Zelenko et al., 2003; Zhou et al., 2005; Chan and Roth, 2011). Joint entity recognition and relation extraction models (Yu and Lam, 2010; Li and Ji, 2014; Miwa and Sasaki, 2014; Ren et al., 2017) have been built to take advantage of the close interaction between these two tasks. While showing the benefits of joint modeling, these complicated methods are feature-based structured learning systems and hence rely heavily on feature engineering. With the success of deep neural networks, NNbased automatic feature learning methods have been applied to relation extraction. These methods use CNN, LSTM, or Tree-LSTM on the word sequence between two entity mentions (Zeng et al., 2014; dos Santos et al., 2015), the shortest dependency paths between two entity mentions (Yan et al., 2015; Li et al., 2015), or the minimal constituency sub-tree spanning two entity mentions (Socher et al., 2012) to encode relevant information for each pair of entity mentions. However, these methods are not end-to-end joint modeling of entities and relations. They assume entity mentions are given and are expected to degrade significantly in performance when a named entity recognizer is needed in the pipeline for real world usage. 
Another challenge for relation extraction is how to take into account the interaction between relations, which is especially important for overlapping relations, i.e., relations sharing common entity mentions. For example, (BarackObama, PresidentOf, UnitedStates) can be inferred from (BarackObama, Governance, UnitedStates); the two triplets are said to exhibit EntityPairOverlap. Another case is that the former triplet could also be inferred from (BarackObama, LiveIn, WhiteHouse) and (WhiteHouse, PresidentialPalace, UnitedStates), where the latter two are said to exhibit SingleEntityOverlap. Although common in knowledge base completion, such interaction, whether via direct deduction or indirect 1410 evidence, is particularly difficult for joint entity recognition and relation extraction models, where entities are not present in the input. Indeed, although Zheng et al. (2017) propose a strong neural end-to-end joint model of entities and relations based on an LSTM sequence tagger, they have to completely give up overlapping relations. In this paper, we propose GraphRel, a neural end-to-end joint model for entity recognition and relation extraction that is the first to handle all three key aspects in relation extraction. GraphRel learns to automatically extract hidden features for each word by stacking a Bi-LSTM sentence encoder and a GCN (Kipf and Welling, 2017) dependency tree encoder. Then GraphRel tags entity mention words and predicts relation triplets that connect mentions, where is the 1st-phase prediction. To gracefully predict relation triplets while taking into account the interactions between them, we add a novel 2nd-phase relation-weighted GCN to GraphRel. Already guided by both entity loss and relation loss, the 1st-phase GraphRel extracts node hidden features along dependency links while establishing a new fully connected graph with relation-weighted edges. Then, by operating on the intermediate graph, the 2nd-phase GCN effectively considers the interaction between entities and (possibly overlapping) relations before the final classification for each edge. With GraphRel, our contribution is threefold: • Linear and dependency structures, as well as implicit features among all word pairs of the text, are considered by our method; • We perform end-to-end, joint modeling of entities and relations while considering all word pairs for prediction; • The interaction between entities and relations is carefully considered. We evaluate the method on two public relation extraction datasets: NYT and WebNLG. The experimental result shows that GraphRel substantially improves the overlapping relations over previous work, and achieves a new state-of-the-art on both datasets. 2 Related Work The BiLSTM-GCN encoder part of our model resembles the BiLSTM-TreeLSTM model proposed by Miwa and Bansal (2016), as they also stack a dependency tree on top of sequences to jointly model entities and relations. They use Bi-LSTM on each sentence for automatic feature learning, and the extracted hidden features are shared by a sequential entity tagger and a shortest dependency path relation classifier. However, while introducing shared parameters for joint entity recognition and relation extraction, they must still pipeline the entity mentions predicted by the tagger to form mention pairs for the relation classifier. Instead of trying to classify each mention pair as in previous work, Zheng et al. (2017) formulate relation extraction as a sequential tagging problem (NovelTagging) as with entity recognition. 
This allows them to model relation extraction by an LSTM decoder on top of a Bi-LSTM encoder. However, while showing promising results on the NYT dataset, their strength comes from focusing on isolated relations and completely giving up overlapping relations, which are relatively rare in the dataset. In comparison, the proposed GraphRel gracefully handles all types of relations while being end-to-end and jointly modeling entity recognition. Zeng et al. (2018) then propose an end-to-end sequence-to-sequence model for relation extraction. They encode each sentence by a Bi-LSTM, and use the last encoder hidden state to initialize one (OneDecoder) or multiple (MultiDecoder) LSTMs for dynamic decoding relation triplets. When decoding, triplets are generated by selecting a relation and copying two words from the sentence. The seq2seq setup partially handles interaction between triplets. However, interactions between relations are only unidirectionally captured by considering previous generated triplets with a compulsory linear order when generating a new one. Instead, in this paper, we propose propagating entity and relation information on a word graph with automatically learned linkage by applying 2nd-phase GCN on top of the LSTM-GCN encoder. Recently, considering dependency structure by GCN has been used in many natural language processing (NLP) tasks. Marcheggiani and Titov (2017) applies GCN on word sequences for semantic role labeling. Liu et al. (2018) encode long documents via GCN to perform text matching. Cetoli et al. (2016) combine RNN and GCN to recognize named entities. There are also some works (Peng et al., 2017; Zhang et al., 2018; Qian et al., 1411 Figure 1: Graph Convolutional Network (GCN) 2019; Luan et al., 2019) about considering dependency structure of word sequence for relation extraction. In our proposed GrpahRel, we not only stack Bi-LSTM and GCN to consider both linear and dependency structures but also adopt a 2ndphase relation-weighted GCN to further model the interaction between entities and relations. 3 Review of GCN As convolutional neural network (CNN), Graph Convolutional Network (GCN) (Kipf and Welling, 2017) convolves the features of neighboring nodes and also propagates the information of a node to its nearest neighbors. Shown in Fig. 1, by stacking GCN layers, GCN can extract regional features for each node. A GCN layer retrieves new node features by considering neighboring nodes’ features with the following equation: hl+1 u = ReLU  X v∈N(u)  W lhl v + bl  , where u is the target node and N (u) represents the neighborhood of u, including u itself; hl v denotes the hidden feature of node v at layer l; W and b are learnable weights, mapping the feature of a node onto adjacent nodes in the graph; and h ∈Rf, W ∈Rf×f, and b ∈Rf, where f is the feature size. 4 Methodology The overall architecture of the proposed GraphRel which contains 2 phases prediction is illustrated in Fig. 2. In the 1st-phase, we adopt bi-RNN and GCN to extract both sequential and regional dependency word features. Given the word features, we predict relations for each word pair and the entities for all words. Then, in 2nd-phase, based on the predicted 1st-phase relations, we build complete relational graphs for each relation, to which we apply GCN on each graph to integrate each relation’s information and further consider the interaction between entities and relations. 
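As a concrete reference for the GCN layer equation reviewed above, the neighborhood aggregation can be written with an adjacency matrix in a few lines of PyTorch, the framework the authors later say they implement GraphRel in. This is a generic sketch with hypothetical names (`GCNLayer`, `adj`), not GraphRel's released code:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """h_u^{l+1} = ReLU( sum_{v in N(u)} (W h_v^l + b) ), with N(u) including u."""

    def __init__(self, feat_size: int):
        super().__init__()
        self.linear = nn.Linear(feat_size, feat_size)     # W and b

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (T, f) node features; adj: (T, T) 0/1 adjacency with self-loops,
        # so row u of (adj @ ...) sums the projected features over N(u).
        return torch.relu(adj @ self.linear(h))
```

Stacking such layers lets information from k-hop neighborhoods reach each node, which is what the 1st-phase encoder exploits on the dependency tree.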
4.1 1st-phase Prediction As the state-of-the-art text feature extractor (Marcheggiani and Titov, 2017; Cetoli et al., 2016), to take into account both sequential and regional dependencies, we first apply bi-directional RNN to extract sequential features and then use bidirectional GCN to further extract regional dependency features. Then, based on the extracted word features, we predict the relation for each word pair along with the word entity. 4.1.1 Bi-LSTM We use the well-known long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) as our bi-RNN cell. For each word, we combine the word embedding and part-of-speech (POS) embedding as the initial feature: h0 u = Word(u) ⊕POS(u), where h0 u represents the initial feature of word u, and Word(u) and POS(u) are the word and POS embedding of word u, respectively. We use pretrained word embeddings from GloVe (Pennington et al., 2014), and the POS embedding is randomly initialized for training with the whole GraphRel. 4.1.2 Bi-GCN Since the original input sentence is a sequence and has no inherent graph structure to speak of, as Cetoli et al. (2016), we use dependency parser to create a dependency tree for the input sentence. We use the dependency tree as the input sentence’s adjacency matrix and use GCN to extract regional dependency features. The original GCN was designed for undirected graphs. To consider both incoming and outgoing word features, we follow Marcheggiani and Titov (2017) and implement bi-GCN as → hl+1 u = ReLU    X v∈ → N(u)  → W l hl v+ → b l    ← hl+1 u = ReLU    X v∈ ← N(u)  ← W l hl v+ ← b l    hl+1 u = → hl+1 u ⊕ ← hl+1 u , 1412 Figure 2: Overview of GraphRel with 2nd-phase relation-weighted GCN. where hl u represents the hidden features of word u at layer l, → N (u) contains all words outgoing from word u, and ← N (u) contains all words incoming to word u, both including word u itself. W and b are both learnable convolutional weights. → W, → b and ← W, ← b also represent the outgoing weight and incoming weight, respectively. We concatenate both outgoing and incoming word features as the final word feature. 4.1.3 Extraction of Entities and Relations With the extracted word features from bi-RNN and bi-GCN, we predict the word entity and extract the relation for each word pair. For the word entity, we predict for all words according to the word features over 1-layer LSTM and apply categorical loss, denoted as eloss1p, to train them. For relation extraction, we remove the dependency edges and do prediction for all word pairs. For each relation r, we learn weight matrices W 1 r , W 2 r , W 3 r and calculate the relation tendency score S as S(w1,r,w2) = W 3 r ReLU W 1 r hw1 ⊕W 2 r hw2  , where S(w1,r,w2) represents the relation tendency score for (w1, w2) under relation r and (w1, w2) refers to the word pair. Note that S(w1,r,w2) should be different from S(w2,r,w1). For word pair (w1, w2), we calculate all of the pair’s relation tendency scores, including non-relation, and denote it as S(w1,null,w2). We apply the softmax function to S(w1,r,w2), yielding Pr(w1, w2), which represents the probability of each relation r for (w1, w2). Since we extract relations for each word pair, our design includes no triplet count restrictions. By investigating the relations for each word pair, Figure 3: Relation-weighted graph for each relation. GraphRel identifies as many relations as possible. With Pr(w1, w2), we can also calculate the relation categorical loss here, denoted as rloss1p. 
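To make this per-word-pair scoring concrete: for every ordered pair (w1, w2), the relation-specific projections above yield one score per relation (including the extra null class), and a softmax over relations gives Pr(w1, w2). The sketch below is our own illustration; the module name, the looped rather than fully batched implementation, and the tensor shapes are assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class PairRelationScorer(nn.Module):
    """S(w1, r, w2) = W3_r ReLU(W1_r h_w1 ⊕ W2_r h_w2), scored for every word pair."""

    def __init__(self, feat_size: int, num_relations: int):
        super().__init__()
        # num_relations counts the extra 'null' (no-relation) class as well.
        self.w1 = nn.ModuleList([nn.Linear(feat_size, feat_size) for _ in range(num_relations)])
        self.w2 = nn.ModuleList([nn.Linear(feat_size, feat_size) for _ in range(num_relations)])
        self.w3 = nn.ModuleList([nn.Linear(2 * feat_size, 1) for _ in range(num_relations)])

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (T, f) word features from the BiLSTM + bi-GCN encoder.
        T = h.size(0)
        scores = []
        for w1, w2, w3 in zip(self.w1, self.w2, self.w3):
            pair = torch.cat([w1(h).unsqueeze(1).expand(T, T, -1),    # W1_r h_w1
                              w2(h).unsqueeze(0).expand(T, T, -1)],   # W2_r h_w2
                             dim=-1)
            scores.append(w3(torch.relu(pair)).squeeze(-1))           # (T, T)
        S = torch.stack(scores, dim=-1)                               # (T, T, R)
        return torch.softmax(S, dim=-1)                               # P_r(w1, w2)
```

In practice one would batch the relation dimension for speed; the explicit loop is kept here only for clarity.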
Please note that though both eloss1p and rloss1p will not be used as final prediction, they are also good auxiliary loss for training 1st-phase GraphRel. 4.2 2nd-phase Prediction The extracted entities and relations in 1st-phase do not take each other into account. To consider interaction between named entities and relations, and to take account implicit features among all word pairs of the text, we present a novel 2nd-phase relation-weighted GCN for further extraction. 4.2.1 Relation-weighted Graph After 1st-phase prediction, we build complete relation-weighted graph for each relation r where the edge of (w1, w2) is Pr(w1, w2), as shown in Fig. 3. Then, 2nd-phase adopts bi-GCN on each relation graph which considers different influenced degrees of different relations and aggregates as the comprehensive word feature. The process can be 1413 represented as hl+1 u = ReLU X v∈V X r∈R Pr (u, v) ×  W l rhl v + bl r ! +hl u, where Pr(u, v) represents the edge weight (the probability of word u to word v under relation r). Wr and br means the GCN weight under relation r. V includes all words and R contains all relations. Note that the complete bi-GCN also takes both the incoming and outgoing situations into account. The bi-GCN in 2nd-phase further considers the relation-weighted propagations and extracts more sufficient features for each word. With the newer word features from 2nd-phase, we perform named entity and relation classification again for more robust relation prediction. The losses for these are denoted as eloss2p and rloss2p. 4.3 Training Detail We use two kinds of loss in GraphRel: entity loss and relation loss, both of which belong to categorical loss. For entity loss, we use the conventional (Begin, Inside, End, Single, Out) tagging for the ground-truth labels. Each word belongs to one of the five classes. The ground-truth entity label for eloss1p and eloss2p are the same; we use cross-entropy as the categorical loss function during training. For relation loss, we feed in a one-hot relation vector as the ground truth of Pr(w1, w2) for each word pair (w1, w2). Since we predict relations based on word pairs, the ground truth should likewise be based on word pairs. That is, word United has a HasPresident relation to both word Barack and word Obama, as does word States. We believe that this word-pair-based relation representation provides GraphRel with the information it needs to learn to extract relations. The groundtruth relation vector for rloss1p and rloss2p are the same. As entity loss, we also use cross-entropy as the categorical loss function during training. For both eloss and rloss, we add an additional double-weight for those in-class entity or relation terms. Finally, the total loss is calculated as the sum of all entity loss and relation loss: lossall = (eloss1p + rloss1p) + α (eloss2p + rloss2p) , where α is a weight between loss of 1st-phase and 2nd-phase. We minimize lossall and train the whole GraphRel in an end-to-end manner. 4.4 Inference During inference, the baseline prediction method is head prediction, where a relation triplet such as (BarackObama, PresidentOf, UnitedStates) is extracted if and only if BarackObama, UnitedStates are both identified as entity mentions and PresidentOf is the most probable class of P(Obama,States). Another baseline extraction method that might be more stable is average prediction, where all word pairs between an entity mention pair are taken into account and decide a relation with maximum averaged probability. 
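Two pieces of the preceding section can be spelled out compactly: the relation-weighted propagation of Section 4.2.1 and the two baseline inference modes just described. The sketch assumes a dense probability tensor P of shape (T, T, R) from the 1st-phase scorer and uses hypothetical names (`RelationWeightedGCN`, `head_predict`, `average_predict`); it is an illustration, not the released implementation.

```python
import torch
import torch.nn as nn

class RelationWeightedGCN(nn.Module):
    """h_u^{l+1} = ReLU( sum_v sum_r P_r(u, v) * (W_r h_v + b_r) ) + h_u."""

    def __init__(self, feat_size: int, num_relations: int):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(feat_size, feat_size)
                                   for _ in range(num_relations)])    # W_r, b_r

    def forward(self, h: torch.Tensor, P: torch.Tensor) -> torch.Tensor:
        # h: (T, f) 1st-phase word features; P: (T, T, R) relation probabilities.
        out = torch.zeros_like(h)
        for r, proj in enumerate(self.proj):
            out = out + P[:, :, r] @ proj(h)      # weighted sum over every word v
        return torch.relu(out) + h                # residual connection to h_u

def head_predict(P: torch.Tensor, head1: int, head2: int) -> int:
    """Head prediction: take the most probable class of the two head words."""
    return int(P[head1, head2].argmax())

def average_predict(P: torch.Tensor, span1, span2) -> int:
    """Average prediction: average P over all word pairs of the two mentions."""
    block = P[span1[0]:span1[1], span2[0]:span2[1]]    # (len1, len2, R)
    return int(block.mean(dim=(0, 1)).argmax())
```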
Finally, we propose a threshold prediction method, where all word pairs of an entity mention pair are still taken into account but in an independent fashion. For example, if 2 of the 4 distributions have PresidentOf as the most probable class, then the triplet (BarackObama, PresidentOf, UnitedStates) is extracted only if 2/4 = 50% > θ where θ is a free threshold parameter. This way, users can select their preferred precision and recall trade-off by adjusting θ. In the experiments, if not specified, threshold inference with θ = 0 is used. 5 Experiments In this section, we present the experimental results of the proposed GraphRel. We first describe implementation details, the datasets, and the baselines we compare with. Then we show the quantitative results for two datasets, conduct detailed analyses, and different categories of named entities. Finally, we demonstrate the improved effect of 2nd-phase via a case study. 5.1 Experimental Settings In our implementation, we chose the pre-trained GloVe (300d) as a fixed word embedding. The word embedding was then concatenated with a trainable POS embedding (15d) as the final input embedding for each word. The POS tag for each word and the dependency tree for whole sentences was retrieved from spaCy (Honnibal and Johnson, 2015). We use bi-LSTM with 256 units and 2-layer bi-GCN with 256 feature size in 1st-phase. For the 2nd-phase, the relation-weighted bi-GCN is 1-layer, also with a feature size of 256. During training, we set the LSTM dropout rate to 0.5, the learning rate to 0.0008, and the loss weight α to 3. We train GraphRel using the Adam (Kingma 1414 Method NYT WebNLG Precision Recall F1 Precision Recall F1 NovelTagging 62.4% 31.7% 42.0% 52.5% 19.3% 28.3% OneDecoder 59.4% 53.1% 56.0% 32.2% 28.9% 30.5% MultiDecoder 61.0% 56.6% 58.7% 37.7% 36.4% 37.1% GraphRel1p 62.9% 57.3% 60.0% 42.3% 39.2% 40.7% GraphRel2p 63.9% 60.0% 61.9% 44.7% 41.1% 42.9% Table 1: Results for both NYT and WebNLG datasets. Category NYT WebNLG Train Test Train Test Normal 37013 3266 1596 246 EPO 9782 978 227 26 SEO 14735 1297 3406 457 All 56195 5000 5019 703 #Relation 24 246 Table 2: Statistics of dataset. and Ba, 2015) optimizer and implement it under PyTorch. 5.1.1 Dataset We use the NYT (Riedel et al., 2010) and WebNLG (Gardent et al., 2017) datasets to evaluate the proposed method. As NovelTagging and MultiDecoder, for NYT, we filter sentences with more than 100 words and for WebNLG, we use only the first sentence in each instance in our experiments. The statistics of NYT and WebNLG is described in Table. 2. We divided relation triplets into three categories: Normal, EntityPairOverlap (EPO), and SingleEntityOverlap (SEO). The counts for each category are also shown in Table 2. Since one entity belonged to several different relations, EntityPairOverlap and SingleEntityOverlap were more difficult tasks. We discuss the result for different categories in the detailed analysis. 5.2 Baseline and Evaluation Metrics We compared GraphRel with two baselines: NovelTagging (Zheng et al., 2017) and MultiDecoder (Zeng et al., 2018). NovelTagging is a sequence tagger which predicts both entity and relation classes for each sentence word. MultiDecoder is a state-of-the-art method that considers relation extraction as a seq-seq problem and uses dynamic decoders to extract relation triplets. The results for both baselines come directly from the original papers. As two baselines, we adopted the standard F1 score to evaluate the results. 
The predicted triplets were seen as correct if and only if the relation and the head of the two corresponding entities were the same as the ground truth. 5.3 Quantitative Results Table 1 presents the precision, recall, and F1 score of NovelTagging, MultiDecoder, and GraphRel for both the NYT and WebNLG datasets. OneDecoder, proposed in MultiDecoder’s original paper, uses only a single decoder to extract relation triplets. GraphRel1p is the proposed method but only 1st-phase, and GraphRel2p is the complete version, which predicts relations and entities after the 2nd-phase. For the NYT dataset, we see that GraphRel1-hop outperforms NovelTagging by 18.0%, OneDecoder by 4.0%, and MultiDecoder by 1.3% in terms of F1. As it acquires both sequential and regional dependency word features, GraphRel1-hop performs better on both precision and recall, resulting in a higher F1 score. With relationweighted GCN in 2nd-phase, GraphRel2p, which considers interaction between name entities and relations, further surpasses MultiDecoder by 3.2% and yields a 1.9% improvement in comparison with GraphRel1p. Similar results can be found on the WebNLG dataset: GraphRel1p outperforms baseline F1 scores by 3.6%, and GraphRel2p further improves 2.2% upon GraphRel1p. From the NYT and WebNLG results, we show that GCN’s regional dependency feature and 2nd-phase prediction both aid relation prediction in terms of precision, recall, and F1 score. NovelTagging and MultiDecoder both use a sequential architecture. As NovelTagging assumes that an entity belongs to a single relation, precision is high but recall is low. MultiDecoder uses a 1415 Figure 4: Results (F1 score) by named entity category. Figure 5: Results (F1 score) by sentence triplet count. dynamic decoder to generate relation triplets. Because of the innate restrictions on RNN unrolling, the number of triplets it can generate is limited. However, for GraphRel, as we predict relations for each word pair, we are free of that restriction. We believe that GraphRel is the most balanced method as it maintains both high precision and high recall, yielding higher F1 scores. 5.4 Detailed Analysis To further analyze the proposed GraphRel, we present the results under different types of triplets, different inference methods, the improvement over name entity recognition, and different numbers of GCN layer used. 5.4.1 Different Types of Triplets We first investigate the results under different entity categories. Fig. 4 presents the results for both NYT and WebNLG datasets. For GraphRel, as we predict relations for all word pairs, all words can have relations with other words: thus entity overlap is not a problem. Though MultiDecoder tries to use a dynamic decoder, the result shows that GraphRel surpasses them in all entity categories. For instance, on the WebNLG dataset, GraphRel1p outperforms MultiDecoder by 3.5% on the normal class, 2.9% on the EPO class, and 3.4% on the SEO class. And GraphRel2p further improves GraphRel1p for each class. We also compare the results given different numbers of triplets in a sentence, as illustrated as Fig. 5. The x-axis represents 1, 2, 3, 4, or more than 5 triplets in a sentence. Because of the single decoder, OneDecoder performs well for single triplets, but performance drops drastically for more triplets in a sentence. As with the experiment for different entity categories, GraphRel1p and GraphRel2p both outperform the baselines under all numbers of triplets in a sentence. 
GraphRel1p outperforms MultiDecoder by 7.5% for more than 5 triplets in a sentence and GraphRel2p further surpasses MultiDecoder by 11.1% on NYT. 5.4.2 Inference Methods We compare the two baseline inference methods, head and average, and the threshold method under 1416 Sentence GraphRel1p GrapRel2p Agra Airport is in India where (Agra Airport, location, India) (Agra Airport, location, India) one of its leaders is Thakur. (India, leader name, Thakur) (India, leader name, Thakur) In Italy, the capital is Rome and (Italy, captical, Rome) (Italy, captical, Rome) A.S. Gubbio 1910 is located there. (A.S. Gubbio 1910, ground, Italy) Asam pedas (aka Asam padeh) is (Asam pedas, alias, Asam padeh) (Asam pedas, alias, Asam padeh) from the Sumatra and Malay (Asam pedas, region, Malay Peninsula) (Asam pedas, region, Malay Peninsula) Peninsula regions of Malaysia. (Asam pedas, country, Malaysia) (Asam padeh, region, Malay Peninsula) (Asam pedas, country, Malaysia) (Asam padeh, country, Malaysia) Table 3: Case Study for Graph1p and GraphRel2p. Figure 6: Results by different decision thresholds. Method NYT WebNLG GraphRel1p 88.8% 89.1% GraphRel2p 89.2% 91.9% Table 4: F1 score of entity recognition for GraphRel. different θ. Fig. 6 shows their results when applied to GraphRel2p on NYT and WebNLG. It can be seen that the threshold inference method efficaciously adjusts the trade-off between precision and recall with different choices of θ. By reducing the threshold from θ = 0.8 to θ = 0, the recall is significantly increased by 1.8% and 1.4% respectively on NYT and WebNLG, with only a marginal 0.6% loss of precision. The effectiveness of the proposed threshold method then leads to the best performance on both datasets, surpassing both the head and average ones. 5.4.3 Improvement over Entity Recognition and Different Numbers of GCN Layer From Table. 4, GraphRel2p can surpass 1st-phase by 0.4% and 2.8% for entity recognition on both NYT and WebNLG. It also shows that 2nd-phase relation-weighted GCN is effective on not only relation extraction but also name entity recognition. To confirm that our 2-layer 1st-phase added 1†#GCN layer in 1st-phase set to 2. ‡#GCN layer in 1st-phase and 2nd-phase set to 2 and 1. Phase #GCN layer NYT WebNLG 1st-phase 2 60.0% 40.7% 3 60.0% 40.5% 2nd-phase† 1 61.9% 42.9% 2 61.6% 42.4% 3rd-phase‡ 1 61.8% 42.7% Table 5: F1 score by different numbers of GCN layer. layer 2nd-phase is the best setting, we investigate the result of different numbers of GCN layer used in both 1st-phase and 2nd-phase. Table. 5 presents the results of using 3 GCN layers for 1st-phase and 2 layers of relation-weighted GCN for 2nd-phase. However, it shows that more GCN layers can not bring out better prediction and our (2, 1) layer setting should be the most suitable one for relation extraction task. We also experiment on 3rd-phase method, adopting relation-weighted GCN again where the graph is based on 2nd-phase’s predicited relations. And it shows that our 2nd-phase is sufficient enough for relation extraction. 5.5 Case Study Table. 3 shows the case study of our proposed GraphRel. The first sentence is an easy case and both GraphRel1p and GraphRel2p can extract accurately. For the second case, although there does not belong to name entity, it should contain the hidden semantic of Italy. Therefore, 2nd-phase can further predict that A.S. Gubbio 1910 grounds in Italy. 
The third case is an SEO class in which GraphRel1p discovers that Asam pedas is the same as Asam padeh, thus the latter should also locate in Malay Peninsula and come from Malaysia. 6 Conclusion In this paper, we present GraphRel, an end-toend relation extraction model which jointly learns 1417 named entities and relations based on graph convolutional networks (GCN). We combine RNN and GCN to extract not only sequential features but also regional dependency features for each word. Implicit features among all word pairs of the text are also considered in our approach. We predict relations for each word pair, solving the problem of entity overlapping. Furthermore, we introduce a novel relation-weighted GCN that considers interactions between named entities and relations. We evaluate the proposed method on the NYT and WebNLG datasets. The results show that our method outperforms previous work by 3.2% and 5.8% and achieves a new state-of-the-art for relation extraction. References A. Cetoli, S. Bragaglia, A.D. O’Harney, and M. Sloan. 2016. Graph convolutional networks for named entity recognition. In Proceedings of TLT. Yee-Seng Chan and Dan Roth. 2011. Exploiting syntactico-semantic structures for relation extraction. In Proceedings of ACL. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating training corpora for nlg micro-planners. In Proceedings of ACL. Sepp Hochreiter and Jrgen Schmidhuber. 1997. Long short-term memory. Neural Computation. Matthew Honnibal and Mark Johnson. 2015. An improved non-monotonic transition system for dependency parsing. In Proceedings of EMNLP. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR. Thomas Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In Proceedings of ICLR. Jiwei Li, Minh-Thang Luong, Dan Jurafsky, and Eudard Hovy. 2015. When are tree structures necessary for deep learning of representations? In Proceedings of EMNLP. Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of ACL. Bang Liu, Ting Zhang, Di Niu, Jinghong Lin, Kunfeng Lai, and Yu Xu. 2018. Matching long text documents via graph convolutional networks. arXiv preprint arXiv:1802.07459. Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of NAACL. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of EMNLP. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of ACL. Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table representation. In Proceedings of EMNLP. Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen tau Yih. 2017. Cross-sentence n-ary relation extraction with graph lstms. In Proceedings of ACL. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP. Yujie Qian, Enrico Santus, Zhijing Jin, Jiang Guo, and Regina Barzilay. 2019. Graphie: A graph-based framework for information extraction. In Proceedings of NAACL. Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, Tarek F. Abdelzaher, and Jiawei Han. 2017. 
Cotype: Joint extraction of typed entities and relations with knowledge bases. In Proceedings of WWW. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Proceedings of ECML-PKDD. Cicero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proceedings of ACL. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of EMNLP-CoNLL. Xu Yan, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of EMNLP. Xiaofeng Yu and Wai Lam. 2010. Jointly identifying entities and extracting relations in encyclopedia text via a graphical model approach. In Proceedings of COLING. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. Journal of Machine Learning Research (JMLR). 1418 Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING. Xiangrong Zeng, Daojian Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2018. Extracting relational facts by an end-to-end neural model with copy mechanism. In Proceedings of ACL. Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of EMNLP. Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extraction of entities and relations based on a novel tagging scheme. In Proceedings of ACL. GuoDong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proceedings of ACL.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1419–1429 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1419 DIAG-NRE: A Neural Pattern Diagnosis Framework for Distantly Supervised Neural Relation Extraction Shun Zheng1 Xu Han2 Yankai Lin2 Peilin Yu3 Lu Chen1 Ling Huang1,4 Zhiyuan Liu2 Wei Xu1 1 Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China 2 Department of Computer Science and Technology, Tsinghua University, Beijing, China 3 Department of Computer Sciences, University of Wisconsin-Madison, Madison, USA 4 AHI Fintech Inc., Beijing, China {zhengs14,hanxu17,linyk14,lchen17}@mails.tsinghua.edu.cn; [email protected]; [email protected]; {liuzy,weixu}@tsinghua.edu.cn; Abstract Pattern-based labeling methods have achieved promising results in alleviating the inevitable labeling noises of distantly supervised neural relation extraction. However, these methods require significant expert labor to write relation-specific patterns, which makes them too sophisticated to generalize quickly. To ease the labor-intensive workload of pattern writing and enable the quick generalization to new relation types, we propose a neural pattern diagnosis framework, DIAG-NRE, that can automatically summarize and refine highquality relational patterns from noise data with human experts in the loop. To demonstrate the effectiveness of DIAG-NRE, we apply it to two real-world datasets and present both significant and interpretable improvements over state-of-the-art methods. Source codes and data can be found at https://github. com/thunlp/DIAG-NRE. 1 Introduction Relation extraction aims to extract relational facts from the plain text and can benefit downstream knowledge-driven applications. A relational fact is defined as a relation between a head entity and a tail entity, e.g., (Letizia Moratti, Birthplace, Milan). The conventional methods often regard relation extraction as a supervised classification task that predicts the relation type between two detected entities mentioned in a sentence, including both statistical models (Zelenko et al., 2003; Zhou et al., 2005) and neural models (Zeng et al., 2014; dos Santos et al., 2015). These supervised models require a large number of human-annotated data to train, which are both expensive and time-consuming to collect. Therefore, Craven et al. (1999); Mintz et al. (2009) proposed distant supervision (DS) to automatically generate large-scale training data for relation exKnowledge Base Head Entity Tail Entity Relation Letizia Moratti Milan Birthplace Training Data for “Birthplace” Relation Sentence DS Label Ground Truth Error Type Marjorie_Kellogg was born in Santa_Barbara . 0 1 FN Mayor Letizia_Moratti of Milan disdainfully dismissed it . 1 0 FP Distant Supervision (DS) Figure 1: Two types of error labels, false negatives (FN) and false positives (FP), caused by DS. traction, by aligning relational facts from a knowledge base (KB) to plain text and assuming that every sentence mentioning two entities can describe their relationships in the KB. As DS can acquire large-scale data without human annotation, it has been widely adopted by recent neural relation extraction (NRE) models (Zeng et al., 2015; Lin et al., 2016). Although DS is both simple and effective in many cases, it inevitably introduces intolerable labeling noises. As Figure 1 shows, there are two types of error labels, false negatives and false positives. 
The reason for false negatives is that a sentence does describe two entities about a target relation, but the fact has not been covered by the KB yet. While for false positives, it is because not all sentences mentioning entity pairs actually express their relations in the KB. The noisy-labeling problem can become severe when the KB and text do not match well and as a result heavily weaken the model performance (Riedel et al., 2010). Recent research has realized that introducing appropriate human efforts is essential for reducing such labeling noises. For example, Zhang et al. (2012); Pershina et al. (2014); Angeli et al. (2014); Liu et al. (2016) mixed a small set of crowd-annotated labels with purely DS-generated noise labels. However, they found that only sufficiently large and high-quality human labels can bring notable improvements, because there are 1420 significantly larger number of noise labels. To enlarge the impact of human efforts, Ratner et al. (2016); Liu et al. (2017a) proposed to incorporate pattern-based labeling, where the key idea was to regard both DS and pattern-based heuristics as the weak supervision sources and develop a weak-label-fusion (WLF) model to produce denoised labels. However, the major limitation of the WLF paradigm lies in the requirement of human experts to write relation-specific patterns. Unfortunately, writing good patterns is both a highskill and labor-intensive task that requires experts to learn detailed pattern-composing instructions, examine adequate examples, tune patterns for different corner cases, etc. For example, the spouse relation example of Ratner et al. (2016) uses 11 functions with over 20 relation-specific keywords1. Even worse, when generalizing to a new relation type, we need to repeat the hard manual operations mentioned above again. To ease the pattern-writing work of human experts and enable the quick generalization to new relation types, we propose a neural pattern diagnosis framework, DIAG-NRE, which establishes a bridge between DS and WLF, for common NRE models. The general workflow of DIAG-NRE, as Figure 2 shows, contains two key stages: 1) pattern extraction, extracting potential patterns from NRE models by employing reinforcement learning (RL), and 2) pattern refinement, asking human experts to annotate a small set of actively selected examples. Following these steps, we not only minimize the workload and difficulty of human experts by generating patterns automatically, but also enable the quick generalization by only requiring a small number of human annotations. After the processing of DIAG-NRE, we obtain highquality patterns that are either supportive or unsupportive of the target relation with high probabilities and can feed them into the WLF stage to get denoised labels and retrain a better model. To demonstrate the effectiveness of DIAG-NRE, we conduct extensive experiments on two real-world datasets, where DIAG-NRE not only achieves significant improvements over state-of-the-art methods but also provides insightful diagnostic results for different noise behaviors via refined patterns. In summary, DIAG-NRE has the following contributions: 1https://github.com/HazyResearch/ snorkel/tree/master/tutorials/intro DIAG-NRE Pattern Refinement High-quality Patterns Induced Patterns Weak Label Fusion (WLF) Pattern Extraction Distant Supervision (DS) & Data NRE Model Denoised Labels DS Labels Figure 2: An overview of DIAG-NRE. 
• easing the pattern-writing work of human experts by generating patterns automatically; • enabling the quick generalization to new relation types by only requiring a small number of human annotations; • presenting both significant and interpretable performance improvements as well as intuitive diagnostic analyses. Particularly, for one relation with severe false negative noises, we improve the F1 score by about 0.4. To the best of our knowledge, we are the first to explicitly reveal and address this severe noise problem for that dataset. 2 Related Work To reduce labeling noises of DS, earlier work attempted to design specific model architectures that can better tolerate labeling noises, such as the multi-instance learning paradigm (Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012; Zeng et al., 2015; Lin et al., 2016; Wu et al., 2017). These models relax the raw assumption of DS by grouping multiple sentences that mention the same entity pair together as a bag and then assuming that at least one sentence in this bag expresses the relation. This weaker assumption can alleviate the noisy-labeling problem to some extent, but this problem still exists at the bag level, and Feng et al. (2018) discovered that bag-level models struggled to do sentence-level predictions. Later work tried to design a dynamic labeladjustment strategy for training (Liu et al., 2017b; Luo et al., 2017). Especially, the most recent work (Feng et al., 2018; Qin et al., 2018) adopted RL to train an agent that interacts with the NRE model to learn how to remove or alter noise labels. These methods work without human intervention by utilizing the consistency and difference between DS-generated labels and model-predicted ones. However, such methods can neither discover 1421 noise labels that coincide with the model predictions nor explain the reasons for removed or altered labels. As discussed in the introduction, introducing human efforts is a promising direction to contribute both significant and interpretable improvements, which is also the focus of this paper. As for the pattern-extraction part, we note that there are some methods with similar insights but different purposes. For example, Zhang et al. (2018) improved the performance of the vanilla LSTM (Hochreiter and Schmidhuber, 1997) by utilizing RL to discover structured representations and Li et al. (2016) interpreted the sentiment prediction of neural models by employing RL to find the decision-changing phrases. However, NRE models are unique because we only care about the semantic inter-entity relation mentioned in the sentence. To the best of our knowledge, we are the first to extract patterns from NRE models by RL. We also note that the relational-pattern mining has been extensively studied (Califf and Mooney, 1999; Carlson et al., 2010; Nakashole et al., 2012; Jiang et al., 2017). Different from those studies, our pattern-extraction method 1) is simply based on RL, 2) does not rely on any lexical or syntactic annotation, and 3) can be aware of the pattern importance via the prediction of NRE models. Besides, Takamatsu et al. (2012) inferred negative syntactic patterns via the example-pattern-relation co-occurrence and removed the false-positive labels accordingly. In contrast, built upon modern neural models, our method not only reduces negative patterns to alleviate false positives but also reinforces positive patterns to address false negatives at the same time. 
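Before turning to the methodology, the two noise types in Figure 1 can be made concrete with a small sketch of the DS labeling rule. The knowledge base, the instance layout, and all function names below are illustrative assumptions rather than the authors' code; only the two example sentences and the KB fact are taken from Figure 1.

```python
# Toy illustration (not the authors' code) of the DS labeling rule and of
# the two error types in Figure 1: false negatives (FN) and false positives (FP).
KB = {("Letizia_Moratti", "Birthplace", "Milan")}   # known relational facts

def ds_label(head, relation, tail, kb):
    """DS assumption: a sentence mentioning (head, tail) is positive for
    `relation` iff the fact is already in the KB, negative otherwise."""
    return 1 if (head, relation, tail) in kb else 0

instances = [
    # (sentence, head entity, tail entity, human ground truth)
    ("Marjorie_Kellogg was born in Santa_Barbara .",
     "Marjorie_Kellogg", "Santa_Barbara", 1),
    ("Mayor Letizia_Moratti of Milan disdainfully dismissed it .",
     "Letizia_Moratti", "Milan", 0),
]

for sentence, head, tail, truth in instances:
    label = ds_label(head, "Birthplace", tail, KB)
    error = {(0, 1): "FN", (1, 0): "FP"}.get((label, truth), "correct")
    print(f"{error}: DS={label} truth={truth} | {sentence}")
```

Under this rule, the first instance becomes a false negative purely because the KB is incomplete, while the second becomes a false positive because entity co-occurrence does not guarantee that the sentence expresses the relation.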
3 Methodology Provided with DS-generated data and NRE models trained on them, DIAG-NRE can generate high-quality patterns for the WLF stage to produce denoised labels. As Figure 2 shows, DIAG-NRE contains two key stages in general: pattern extraction (Section 3.2) and pattern refinement (Section 3.3). Moreover, we briefly introduce the WLF paradigm in Section 3.4 for completeness. Next, we start with reviewing the common input-output schema of modern NRE models. Reward NRE Model r Encoder Input Representation Agent w1 p1 p2 w2 wT pT x ˆw1 p1 ˆw2 p2 ˆwT pT ˆx State a1 a2 aT a Agent Network Action New State PER Entities CITY 0 0 Actions 0 1 1 0 . Berlin in born was Joachim_Fest Tokens Pattern-induction Example Pattern ENTITY1:PER PAD{1,3} born in ENTITY2:CITY Figure 3: The RL-based pattern-extraction workflow and a typical pattern-induction example, where we induce a pattern for the Birthplace relation via a series of actions (0: retaining, 1: erasing). 3.1 NRE Models Given an instance s with T tokens2, a common input representation of NRE models is x = [x1, x2, · · · , xT ], where xi ∈Rdx denotes the embedding of token i and dx is the token embedding size. Particularly, xi is the concatenation of the word embedding, wi ∈Rdx, and position embedding, pi ∈Rdp, as [wi; pi], to be aware of both semantics and entity positions, where dx = dw + dp. Given the relation type r, NRE models perform different types of tensor manipulations on x and obtain the predicting probability of r given the instance s as Pφ(r|x), where φ denotes model parameters except for the input embedding tables. 3.2 Pattern Extraction In this stage, we build a pattern-extraction agent to distill relation-specific patterns from NRE models with the aforementioned input-output schema. The basic idea is to erase irrelevant tokens and preserve the raw target prediction simultaneously, which can be modeled as a token-erasing decision process and optimized by RL. Figure 3 shows this RL-based workflow in a general way together with an intuitive pattern-induction example. Next, we elaborate details of this workflow. Action. The agent takes an action ai, retaining (0) or erasing (1), for each token of the instance s and transforms the input representation from x into ˆx. During this process, the column i of x, 2In this paper, we refer to a sentence together with an entity pair as an instance and omit the instance index for brevity. 1422 xi = [wi; pi], corresponding to the token i of raw instance s, is transformed into ˆxi = [ ˆwi; pi], where the position vectors are left untouched and the new word vector ˆwi is adjusted based on the action taken by the agent. For the retaining action, we retain the raw word vector as ˆwi = wi. While for erasing, we set ˆwi to be all zeros to remove the semantic meaning. After taking a sequence of actions, a = [a1; a2; · · · ; aT ], we get the transformed representation ˆx with ˆT tokens retained. Reward. Our purpose is to find the most simplified sequence that preserves the raw prediction confidence. Therefore, given the raw input representation x and the corresponding action vector a, we define the reward as follows: R(a|x) = log Pφ(r|ˆx) Pφ(r|x)  | {z } Prediction Confidence +η · (1 −ˆT/T) | {z } Sparsity , where the total reward is composed of two parts: one is the log-likelihood term to pursue the high prediction confidence and the other is the sparse ratio term to induce sparsity in terms of retained tokens. We balance these two parts through a hyper-parameter η. State. 
To be general, the state provided to the agent should be independent of NRE architectures. Moreover, the state needs to incorporate complete information of the current instance. Therefore, in our design, the agent directly employs the input representation x as the state. Agent. We employ policy-based RL to train a neural-network-based agent that can predict a sequence of actions for an instance to maximize the reward. Our agent network directly estimates πΘ(a|x) = QT i=1 πΘ(ai|x) in a nonautoregressive manner by calculating πΘ(ai|x) in parallel, where Θ denotes the parameters of the agent network. To enrich the contextual information when deciding the action for each token, we employ the forward and backward LSTM networks to encode x into h as −→ h = [−→h 1, −→h 2, · · · , −→h T ] = Forward-LSTM(x), ←− h = [←−h 1, ←−h 2, · · · , ←−h T ] = Backward-LSTM(x), h = [h1, h2, · · · , hT ] = Concatenate(−→ h , ←− h ), where −→h i ∈Rdh, ←−h i ∈Rdh, hi = [−→h i; ←−h i] ∈ R2×dh, and dh denotes the size of LSTM’s hidden state. Then, we employ an attention-based strategy (Bahdanau et al., 2015) to aggregate the contextual information as c = [c1, c2, · · · , cT ]. For each token i, we compute the context vector ci ∈R2dh as follows: ci = T X j=1 αi jhj, where each scalar weight αi j is calculated by ei j/(PT k=1 ei k). Here ei j is computed by a small network as ei j = v⊤ α tanh(Wxxi + Whhj), where Wx ∈R2dh×dx, Wh ∈R2dh×2dh and vα ∈ R2dh are network parameters. Next, we compute the final representation to infer actions as z = [z1, z2, · · · , zT ], where for each token i, zi = [xi; ci] ∈Rdx+2dh incorporates semantic, positional and contextual information. Finally, we estimate the probability of taking action ai for token i as πΘ(ai|x) = oai i · (1 −oi)(1−ai), where oi = sigmoid(W ⊤ o zi + bo), Wo ∈Rdx+2dh and bo ∈R1 are network parameters. Optimization. We employ the REINFORCE algorithm (Williams, 1992) and policy gradient methods (Sutton et al., 2000) to optimize parameters of the agent network, where the key step is to rewrite the gradient formulation and then apply the back-propagation algorithm (Rumelhart et al., 1986) to update network parameters. Specifically, we define our objective as: L(Θ) = Es  EπΘ(a|x)R(a|x)  , where x denotes the input representation of the instance s. By taking the derivative of J(Θ) with respect to Θ, we can obtain the gradient ∇ΘL(Θ) as Es[EπΘ(a|x)[R(a|x)∇Θ log πΘ(a|x)]]. Besides, we utilize the ϵ-greedy trick to balance exploration and exploitation. Pattern Induction. Given instances and corresponding agent actions, we take the following steps to induce compact patterns. First, to be general, we substitute raw entity pairs with corresponding entity types. Then, we evaluate the agent to obtain retained tokens with the relative distance preserved. To enable the generalized position indication, we divide the relative distance between two adjacent retained tokens into four categories: zero (no tokens between them), short (1-3 tokens), 1423 Human Annotation Ground Truth Instances ✔ Instance P1-2 ✔ Instance P1-1 ✖ Instance P2-1 Relation Type r Pattern Hierarchy Pattern 2 Pattern 2.1 Pattern 2.1.1 Pattern 1 Pattern 1.1 Pattern 1.2 Get Pattern-matched Instances Figure 4: The human-in-the-loop pattern refinement. medium (4-9 tokens) and long (10 or more tokens) distance. For instance, Figure 3 shows a typical pattern-induction example. 
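A minimal sketch of this induction step is given below. The short and medium buckets mirror the PAD{1,3} and PAD{4,9} placeholders that appear later in Table 4; the spelling of the long-distance placeholder and all function names are assumptions rather than the released implementation.

```python
# Minimal sketch of pattern induction from agent actions (0: retain, 1: erase).
def distance_pad(gap):
    """Map the number of erased tokens between two retained tokens to the
    four distance categories described above."""
    if gap == 0:
        return None           # adjacent tokens: no placeholder
    if gap <= 3:
        return "PAD{1,3}"     # short
    if gap <= 9:
        return "PAD{4,9}"     # medium
    return "PAD{10,}"         # long (placeholder spelling assumed)

def induce_pattern(tokens, actions, entity_types):
    """tokens: raw tokens; actions[i] == 0 keeps token i; entity_types maps
    a position inside an entity mention to e.g. 'ENTITY1:PER'."""
    kept = [i for i, a in enumerate(actions) if a == 0]
    parts, prev = [], None
    for i in kept:
        if prev is not None:
            pad = distance_pad(i - prev - 1)
            if pad is not None:
                parts.append(pad)
        parts.append(entity_types.get(i, tokens[i]))
        prev = i
    return " ".join(parts)

# The Birthplace example from Figure 3.
tokens  = ["Joachim_Fest", "was", "born", "in", "Berlin", "."]
actions = [0, 1, 0, 0, 0, 1]
types   = {0: "ENTITY1:PER", 4: "ENTITY2:CITY"}
print(induce_pattern(tokens, actions, types))
# -> ENTITY1:PER PAD{1,3} born in ENTITY2:CITY
```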
Patterns with such formats can incorporate multiple kinds of crucial information, such as entity types, key tokens and the relative distance among them. 3.3 Pattern Refinement The above pattern-extraction stage operates at the instance level by producing a pattern for each evaluated instance. However, after aggregating available patterns at the dataset level, there inevitably exist redundant ones. Therefore, we design a pattern hierarchy to merge redundant patterns. Afterward, we can introduce human experts into the workflow by asking them to annotate a small number of actively selected examples. Figure 4 shows the general workflow of this stage. Pattern Hierarchy. To identify redundant patterns, we group multiple instances with the same pattern and build a pattern hierarchy by the matching statistics. In this hierarchy, the parent pattern should cover all instances matched by the child pattern. As the parent pattern already has sufficient relation-supporting signals, we can omit child patterns for human annotation. Moreover, the number of instances from which the pattern can be induced is closely related to the pattern representativeness. Therefore, we follow the decreasing order of this number to select top nr most representative patterns for human annotation. Human Annotation. To quantitatively evaluate the pattern quality, we adopt an approximate method by randomly selecting na pattern-matched instances and annotating them manually. Thus, for each relation type, we end up with nr ∗na humanannotated instances. We assign patterns with the accuracy higher than ph and lower than pl into the positive pattern set and the negative pattern set, respectively, to serve the WLF stage. In practice, users can tune these hyper-parameters (nr, na, ph and pl) accordingly for different applications, such as increasing ph to prefer precision. While in this paper, to show the wide applicability and robustness of DIAG-NRE, we demonstrate that a single configuration can handle all 14 relation types. 3.4 Weak Label Fusion The WLF model aims to fuse weak labels from multiple labeling sources, including both DS and patterns, to produce denoised labels. In this paper, we adopt data programming (DP) (Ratner et al., 2016) at our WLF model. The input unit of DP is called labeling function (LF), which takes one instance and emits a label (+1: positive, -1: negative or 0: unknown). In our case, the LF of DS generates +1 or -1, LFs of positive patterns generate +1 or 0, and LFs of negative patterns generate -1 or 0. We estimate parameters of DP on the small set of human-annotated labels with a closed-form solution (see the appendix for detailed formulations). With the help of DP, we get denoised labels to retrain a better model. Note that designing better generic WLF models is still a hot research topic (Varma et al., 2016; Bach et al., 2017; Liu et al., 2017a) but outside the scope of this work, which is automatically generating patters to ease human’s work. 4 Experiments In this section, we present experimental results and comprehensive analyses to demonstrate the effectiveness of DIAG-NRE. 4.1 Experimental Setup Evaluation. To clearly show the different noise behaviours for various relation types, we treat each relation prediction task as a single binary classification problem, that is predicting the existing or not of that relation for a given instance. 
Different from previous studies, we report relation-specific metrics (Precision, Recall and F1 scores, all in the percentage format) and macro-averaged ones at the dataset level, because the distribution of relation types is extremely imbalanced and the microaveraged evaluation inevitably overlooks noisylabeling issues of many relation types. Moreover, we only utilize human-annotated test data to evaluate models trained on noise labels, as Ratner et al. (2016); Liu et al. (2016) did. The reason is that the 1424 TID Relation Abbreviation Train Test NYT R0 Bus./Company 5.3k 186 R1 Loc./Admin. Div. 4.9k 180 R2 Loc./Capital 5.3k 20 R3 Loc./Contains 44.6k 263 R4 Loc./Country 4.9k 89 R5 Loc./Neighbor. 5.6k 55 R6 Peo./National. 7.5k 84 R7 Peo./Place Lived 6.7k 230 R8 Peo./Birthplace 3.1k 16 R9 Peo./Deathplace 1.9k 19 UW Ru 6 Peo./National. 107k 1.8k Ru 7 Peo./Place Lived 20.9k 3.8k Ru 8 Peo./Birthplace 15.3k 458 Ru 9 Peo./Deathplace 5.7k 1.3k Table 1: The total 14 relation prediction tasks with corresponding task IDs (TIDs), relation abbreviations and the number of positive labels in the train and test sets. The train set, generated by DS, contains 452, 223 and 395, 738 instances for NYT and UW, respectively. The test set, annotated by the human, contains 1, 027 and 15, 622 instances for NYT and UW, respectively. severe labeling noises of many relation types heavily weaken the reliability of the DS-based heldout evaluation (Mintz et al., 2009), which cannot judge the performance accurately. Data & Tasks. We select top ten relation types with enough coverage (over 1, 000 instances) from the NYT dataset (Riedel et al., 2010)3 and all four relation types from the UW dataset (Liu et al., 2016)4. Originally, the NYT dataset contains a train set and a test set both generated by DS with 522, 611 and 172, 448 instances, respectively; the UW dataset contains a train set generated by DS, a crowd-annotated set and a minimal human-annotated test set with 676, 882, 18, 128 and 164 instances, respectively. To enable the reliable evaluation based on human annotations, for the NYT dataset, we randomly select up to 100 instances per relation (including the special unknown relation NA) from the test set and manually annotate them; while for the UW dataset, we directly utilize the crowd-annotated set (disjoint from the train set) with the broad coverage and very high quality as the ground truth. Table 1 summaries detailed statistics of these 14 tasks. Hyper-parameters. We implement DIAG-NRE based on Pytorch5 and directly utilize its default 3http://iesl.cs.umass.edu/riedel/ecml/ 4https://www.cs.washington.edu/ai/ gated_instructions/naacl_data.zip 5https://pytorch.org/ initialization for neural networks. For the NRE model, we adopt a simple yet effective LSTMbased architecture described in Zhou et al. (2016) and adopt widely-used hyper-parameters (see the appendix for details). As for DIAG-NRE, we use the following configuration for all 14 tasks. For the agent network, the LSTM hidden size is 200, the optimizer is Adam with a learning rate of 0.001, the batch size is 5, and the training epoch is 10. At the pattern-extraction stage, we use ϵ = 0.1 and alter η in {0.05, 0.1, 0.5, 1.0, 1.5} to train multiple agents that tend to squeeze patterns with different granularities and combine outputs of all agents to serve the pattern-refinement stage. To speed up the agent training, we use filtered instances by taking the top 10, 000 ones with the highest prediction probabilities. 
At the patternrefinement stage, hyper-parameters include nr = 20, na = 10, ph = 0.8 and pl = 0.1. Thus, for each task, we get 200 human-annotated instances (about 0.05% of the entire train set) and at most 20 patterns for the WLF stage. 4.2 Performance Comparisons Based on the above hyper-parameters, DIAG-NRE together with the WLF model can produce denoised labels to retrain a better NRE model. Next, we present the overall performance comparisons of NRE models trained with different labels. Baselines. We adopt the following baselines: 1) Distant Supervision, the vanilla DS described in Mintz et al. (2009), 2) Gold Label Mix (Liu et al., 2016), mixing human-annotated highquality labels with DS-generated noise labels, and 3) RLRE (Feng et al., 2018), building an instanceselection agent to select correct-labeled ones by only interacting with NRE models trained on noise labels. Specifically, for Gold Label Mix, we use the same 200 labels obtained at the patternrefinement stage as the high-quality labels. To focus on the impact of training labels produced with different methods, besides for fixing all hyperparameters exactly same, we run the NRE model with five random seeds, ranging from 0 to 4, for each case and present the averaged scores. Overall Results. Table 2 shows the overall results with precision (P.), recall (R.) and F1 scores. For a majority of tasks suffering large labeling noises, including R1, R4, R5, R8, R9 and Ru 8, we improve the F1 score by 5.0 over the best baseline. Notably, the F1 improvement for task R1 has 1425 TID Distant Supervision Gold Label Mix RLRE DIAG-NRE P. R. F1 P. R. F1 P. R. F1 P. R. F1 Inc-DS Inc-Best R0 95.1 41.5 57.8 95.7 40.8 57.2 97.7 32.4 48.6 95.7 42.8 59.1 +1.4 +1.4 R1 91.9 9.1 16.4 90.2 11.7 20.2 92.6 4.2 8.0 94.5 44.8 60.7 +44.3 +40.4 R2 37.0 83.0 50.8 40.0 85.0 54.0 64.8 68.0 66.1 42.4 85.0 56.0 +5.2 -10.1 R3 87.5 79.2 83.2 87.1 80.2 83.5 87.5 79.2 83.2 87.0 79.8 83.2 +0.0 -0.3 R4 95.3 50.1 64.7 94.1 49.0 63.9 98.2 47.9 64.0 94.5 57.5 71.5 +6.7 +6.7 R5 82.7 29.1 42.9 84.7 29.5 43.6 82.7 29.1 42.9 84.5 37.5 51.8 +8.9 +8.3 R6 82.0 83.8 82.8 81.6 84.0 82.7 82.0 83.8 82.8 81.5 83.3 82.3 -0.5 -0.5 R7 82.3 22.3 35.1 82.0 22.6 35.4 83.5 21.8 34.5 82.0 25.6 39.0 +3.8 +3.6 R8 66.2 32.5 39.8 70.5 47.5 55.8 66.2 32.5 39.8 73.4 61.3 65.5 +25.7 +9.7 R9 85.4 73.7 77.9 85.9 80.0 81.5 85.4 73.7 77.9 89.0 87.4 87.1 +9.2 +5.6 Avg. 80.5 50.4 55.1 81.2 53.0 57.8 84.1 47.3 54.8 82.5 60.5 65.6 +10.5 +6.5 Ru 6 35.9 75.7 48.7 35.8 75.0 48.5 36.0 75.3 48.7 36.2 74.5 48.7 +0.0 -0.0 Ru 7 57.8 18.5 28.0 59.3 19.1 28.8 57.8 18.5 28.0 56.3 23.5 33.1 +5.1 +4.3 Ru 8 37.3 64.0 46.9 40.0 64.9 49.1 37.3 64.0 46.9 48.1 71.9 57.5 +10.6 +8.3 Ru 9 77.1 71.3 74.0 77.5 70.3 73.5 77.1 71.3 74.0 80.7 71.1 75.4 +1.5 +1.5 Avg. 52.0 57.4 49.4 53.1 57.3 50.0 52.0 57.3 49.4 55.3 60.2 53.7 +4.3 +3.5 Table 2: Overall results for 14 tasks, where we present relation-specific scores, the macro-averaged ones (Avg.), the F1 improvement of DIAG-NRE over the vanilla DS (Inc-DS) and the best baseline (Inc-Best), and we highlight the best F1 for each task and the significant improvements. TID Prec. Recall Acc. #Pos. #Neg. 
R0 100.0 81.8 82.0 20 0 R1 93.9 33.5 36.2 18 0 R2 75.7 88.0 76.5 9 5 R3 100.0 91.4 92.0 20 0 R4 93.3 72.4 80.9 10 2 R5 93.8 77.3 86.5 15 0 R6 88.3 76.9 75.1 14 0 R7 91.9 64.6 64.0 20 0 R8 29.3 30.4 60.0 4 10 R9 66.7 38.1 74.4 6 11 Ru 6 81.8 90.7 81.0 7 0 Ru 7 93.5 70.7 68.3 17 1 Ru 8 35.0 70.0 60.0 4 15 Ru 9 87.5 59.2 67.7 12 5 Table 3: Total diagnostic results, where columns contain the precision, recall and accuracy of DS-generated labels evaluated on 200 human-annotated labels as well as the number of positive and negative patterns preserved after the pattern-refinement stage, and we underline some cases in which DS performs poorly. reached 40. For some tasks with fewer noises, including R0, R7, Ru 7 and Ru 9, our method can obtain small improvements. For a few tasks, such as R3, R6 and Ru 6, only using DS is sufficient to train competitive models. In such cases, fusing other weak labels may have negative effects, but these side effects are small. The detailed reasons for these improvements will be elaborated together with the diagnostic results in Section 4.3. Another interesting observation is that RLRE yields the best result on tasks R2 and Ru 6 but gets worse results than the vanilla DS on tasks R0, R1, R4 and R7. Since the instance selector used in RLRE is difficult to be interpreted, we can hardly figure out the specific reason. We conjecture that this behavior is due to the gap between maximizing the likelihood of the NRE model and the ground-truth instance selection. In contrast, DIAG-NRE can contribute both stable and interpretable improvements with the help of human-readable patterns. 4.3 Pattern-based Diagnostic Results Besides for improving the extraction performance, DIAG-NRE can interpret different noise effects caused by DS via refined patterns, as Table 3 shows. Next, we elaborate these diagnostic results and the corresponding performance degradation of NRE models from two perspectives: false negatives (FN) and false positives (FP). FN. A typical example of FN is task R1 (Administrative Division), where the precision of DS-generated labels is fairly good but the recall is too low. The underlying reason is that the relational facts stored in the KB cover too few real facts actually contained by the corpus. This low-recall issue introduces too many negative instances with common relation-supporting patterns and thus confuses the NRE model in capturing correct features. This issue also explains results of R1 in Table 2 that the NRE model trained on DS-generated data achieves high precision but low recall, while DIAG-NRE with reinforced positive patterns can obtain significant im1426 TID Patterns & Matched Examples DS RLRE DIAG-NRE R1 Pos. Pattern: in ENTITY2:CITY PAD{1,3} ENTITY1:COUNTRY (DS Label: 382 / 2072) Example: He will , however , perform this month in Rotterdam , the Netherlands , and Prague . 0 None 0.81 R8 Pos. Pattern: ENTITY1:PER PAD{1,3} born PAD{1,3} ENTITY2:CITY (DS Label: 44 / 82) Example: Marjorie Kellogg was born in Santa Barbara . 0 0 1.0 Neg. Pattern: mayor ENTITY1:PER PAD{1,3} ENTITY2:CITY (DS Label: 21 / 62) Example: Mayor Letizia Moratti of Milan disdainfully dismissed it . 1 1 0.0 Ru 9 Pos. Pattern: ENTITY1:PER died PAD{4,9} ENTITY2:CITY (DS Label: 66 / 108) Example: Dahm died Thursday at an assisted living center in Huntsville ... 0 0 1.0 Neg. Pattern: ENTITY1:PER PAD{4,9} rally PAD{1,3} ENTITY2:CITY (DS Label: 40 / 87) Example: Bhutto vowed to hold a rally in Rawalpindi on Friday ... 1 1 0.0 Table 4: Positive (Pos.), negative (Neg.) 
patterns and associated examples with labels produced by different methods. For each pattern, we present “DS Label” as the number of DS-generated positive labels over the number of pattern-matched instances. For RLRE, None means the instance is removed. For DIAG-NRE, we present the soft label produced by the WLF model. provements due to much higher recall. For tasks R8 (Birthplace) and R9 (Deathplace), we observe the similar low-recall issues. FP. The FP errors are mainly caused by the assumption of DS described in the introduction. For example, the precision of DS-generated labels for tasks R8 and Ru 8 is too low. This low precision means that a large portion of DS-generated positive labels do not indicate the target relation. Thus, this issue inevitably causes the NRE model to absorb some irrelevant patterns. This explanation also corresponds to the fact that we have obtained some negative patterns. By reducing labels with FP errors through negative patterns, DIAG-NRE can achieve large precision improvements. For other tasks, DS-generated labels are relatively good, but the noise issue still exists, major or minor, except for task R3 (Contains), where labels automatically generated by DS are incredibly accurate. We conjecture the reason for such high-quality labeling is that for task R3, the DS assumption is consistent with the written language convention: when mentioning two locations with the containing relation in one sentence, people get used to declaring this relation explicitly. 4.4 Incremental Diagnosis In addition to the performance comparisons based on 200 human-annotated instances, we show the incremental diagnosis ability of DIAG-NRE by gradually increasing the number of human annotations from 10 to 200. As Figure 5 shows, where we pick those tasks (three from NYT and two from UW) suffering large labeling noises, most tasks experience a rapid improvement phase with the 10 50 100 150 200 # of human annotations 1 5 10 20 40 F1 improvements over DS R1 R8 R9 Ru 7 Ru 8 Figure 5: The F1 improvements of DIAG-NRE over DS with the increased number of human annotations. help of high-quality patterns automatically generated by DIAG-NRE and then enter a saturate phase where adding annotations does not contribute much. This saturation accords with the intuition that high-quality relational patterns are often limited. The only exception is task R9 that drops first and then increases again, the reason is that the fully automatic pattern refinement of DIAG-NRE produces one incorrect pattern accidentally, while later patterns alleviate this mistake. Actually, in practice, users can further curate patterns generated by DIAG-NRE to get even better results, which can also be much easier and quicker than writing patterns from scratch. 4.5 Case Studies Table 4 shows five pattern examples from three tasks. For task R1, the positive pattern can remedy the extremely low coverage caused by DS. For tasks R8 and Ru 9, besides for the help of the positive pattern, the negative pattern can correct many 1427 FP labels caused by DS. These cases intuitively illustrate the ability of DIAG-NRE to diagnose and denoise DS-generated labels. 5 Conclusion and Future Work In this paper, we propose a neural pattern diagnosis framework, DIAG-NRE, to diagnose and improve NRE models trained on DS-generated data. 
DIAG-NRE not only eases the hard patternwriting work of human experts by generating patterns automatically, but also enables the quick generalization to new relation types by only requiring a small number of human annotations. Coupled with the WLF model, DIAG-NRE can produce denoised labels to retrain a better NRE model. Extensive experiments with comprehensive analyses demonstrate that DIAG-NRE can contribute both significant and interpretable improvements. For the future work, we plan to extend DIAGNRE to other DS-based applications, such as question answering (Lin et al., 2018), event extraction (Chen et al., 2017), etc. Acknowledgements This work is supported in part by the National Natural Science Foundation of China (NSFC) Grant 61532001, 61572273, 61532010, Tsinghua Initiative Research Program Grant 20151080475, and gift funds from Ant Financial and Nanjing Turing AI Institute. Xu Han is also supported by 2018 Tencent Rhino-Bird Elite Training Program. References Gabor Angeli, Julie Tibshirani, Jean Wu, and Christopher D Manning. 2014. Combining distant and partial supervision for relation extraction. In EMNLP. Stephen H. Bach, Bryan He, Alexander Ratner, and Christopher R´e. 2017. Learning the structure of generative models withut labeled data. In ICML. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Mary Elaine Califf and Raymond J. Mooney. 1999. Relational learning of pattern-match rules for information extraction. In AAAI. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka Jr, and Tom M Mitchell. 2010. Toward an architecture for neverending language learning. In AAAI. Yubo Chen, Shulin Liu, Xiang Zhang, Kang Liu, and Jun Zhao. 2017. Automatically labeled data generation for large scale event extraction. In ACL. Mark Craven, Johan Kumlien, et al. 1999. Constructing biological knowledge bases by extracting information from text sources. In ISMB. Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xiaoyan Zhu. 2018. Reinforcement learning for relation classification from noisy data. In AAAI. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In ACL. Meng Jiang, Jingbo Shang, Taylor Cassidy, Xiang Ren, Lance M Kaplan, Timothy P Hanratty, and Jiawei Han. 2017. Metapad: meta pattern discovery from massive text corpora. In KDD. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220. Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. In ACL. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In ACL. Angli Liu, Stephen Soderland, Jonathan Bragg, Christopher H Lin, Xiao Ling, and Daniel S Weld. 2016. Effective crowd annotation for relation extraction. In NAACL-HLT. Liyuan Liu, Xiang Ren, Qi Zhu, Shi Zhi, Huan Gui, Heng Ji, and Jiawei Han. 2017a. Heterogeneous supervision for relation extraction: a representation learning approach. In EMNLP. Tianyu Liu, Kexiang Wang, Baobao Chang, and Zhifang Sui. 2017b. 
A soft-label method for noisetolerant distantly supervised relation extraction. In EMNLP. Bingfeng Luo, Yansong Feng, Zheng Wang, Zhanxing Zhu, Songfang Huang, Rui Yan, and Dongyan Zhao. 2017. Learning with noise: enhance distantly supervised relation extraction with dynamic transition matrix. In ACL. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In ACL. 1428 Ndapandula Nakashole, Gerhard Weikum, and Fabian Suchanek. 2012. Patty: a taxonomy of relational patterns with semantic types. In EMNLP. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: global vectors for word representation. In EMNLP. Maria Pershina, Bonan Min, Wei Xu, and Ralph Grishman. 2014. Infusion of labeled data into distant supervision for relation extraction. In ACL. Pengda Qin, Weiran Xu, and William Yang Wang. 2018. Robust distant supervision relation extraction via deep reinforcement learning. In ACL. Alexander J Ratner, Christopher M De Sa, Sen Wu, Daniel Selsam, and Christopher R´e. 2016. Data programming: creating large training sets, quickly. In NIPS. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In ECML. David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1986. Learning representations by backpropagating errors. Nature. Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In ACL. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In EMNLP. Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In NIPS. Shingo Takamatsu, Issei Sato, and Hiroshi Nakagawa. 2012. Reducing wrong labels in distant supervision for relation extraction. In ACL. Paroma Varma, Bryan He, Dan Iter, Peng Xu, Rose Yu, Christopher De Sa, and Christopher R´e. 2016. Socratic learning: augmenting generative models to incorporate latent subsets in training data. arXiv preprint arXiv:1610.08123. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning. Yi Wu, David Bamman, and Stuart Russell. 2017. Adversarial training for relation extraction. In EMNLP. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. JMLR. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In EMNLP. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao, et al. 2014. Relation classification via convolutional deep neural network. In COLING. Ce Zhang, Feng Niu, Christopher R´e, and Jude Shavlik. 2012. Big data versus the crowd: looking for relationships in all the right places. In ACL. Tianyang Zhang, Minlie Huang, and Li Zhao. 2018. Learning structured representation for text classification via reinforcement learning. In AAAI. Guodong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In ACL. Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attentionbased bidirectional long short-term memory networks for relation classification. In ACL. 
1429 A Appendices In the appendices, we introduce formulation details of the weak-label-fusion (WLF) model and the hyper-parameters for our neural relation extraction (NRE) model. A.1 Weak Label Fusion As mentioned in the main body, we employ the data programming (DP) (Ratner et al., 2016) as our WLF model. DP proposed an abstraction of the weak label generator, named as the labeling function (LF), which can incorporate both DS and pattern-based heuristics. Typically, for a binary classification task, an LF is supposed to produce one label (+1: positive, -1: negative or 0: unknown) for each input instance. In our case, the LF of DS generates +1 or -1, LFs of positive patterns generate +1 or 0, and LFs of negative patterns generate -1 or 0. Given m labeling functions, we can write the joint probability of weak labels Ls and the true label Y s ∈{−1, +1} for instance s, Pα,β(Ls, Y s), as 1 2 m Y i=1 (βiαi1{Ls i =Y s} + βi(1 −αi)1{Ls i =−Y s} + (1 −βi)1{Ls i =0}), where each Ls i ∈{−1, 0, +1} denotes the weak label generated for instance s by the ith labeling function, and α and β are model parameters to be estimated. Originally, Ratner et al. (2016) conducted the unsupervised parameter estimation based on unlabeled data by solving max α,β X s∈S log X Y s Pα,β (Ls, Y s)) ! . Different from the general DP that treats each LF with the equal prior, we have strong priors that patterns produced by DIAG-NRE are either supportive or unsupportive of the target relation with high probabilities. Therefore, in our case, we directly employ the small labeled set SL obtained at the pattern-refinement stage to estimate (α, β) by solving max α,β X s∈SL log Pα,β(Ls, Y s), where the closed-form solutions are αi = P s∈SL 1{Ls i =Y s} P s∈SL h 1{Ls i =Y s} + 1{Ls i =−Y s} i, βi = P s∈SL h 1{Ls i =Y s} + 1{Ls i =−Y s} i |SL| , for each i ∈{1, · · · , m}. After estimating these parameters, we can infer the true label distribution by the posterior Pα,β(Y s|Ls) and use the denoised soft label to train a better NRE model, just as Ratner et al. (2016) did. A.2 Hyper-parameters of the NRE model For the NRE model, we implement a simple yet effective LSTM-based architecture described in (Zhou et al., 2016). We conduct the hyperparameter search via cross-validation and adopt the following configurations that can produce pretty good results for all 14 tasks. First, the word embedding table (dw = 100) is initialized with Glove vectors (Pennington et al., 2014), the size of the position vector (dp) is 5, the maximum length of the encoded relative distance is 60, and we follow (Zeng et al., 2015; Lin et al., 2016) to randomly initialize these position vectors. Besides, the LSTM hidden size is 200, and the dropout probabilities at the embedding layer, the LSTM layer and the last layer are 0.3, 0.3 and 0.5, respectively. During training, we employ the Adam (Kingma and Ba, 2014) optimizer with the learning rate of 0.001 and the batch size of 50. Moreover, we select the best epoch according to the score on the validation set. Notably, we observe that when training on data with large labeling noises, different parameter initializations can heavily influence the extraction performance of trained models. Therefore, as mentioned in the main body, to clearly and fairly show the actual impact of different types of training labels, we restart the training of NRE models with 5 random seeds, ranging from 0 to 4, for each case and report the averaged scores.
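As a complement to the derivation in Appendix A.1, the closed-form solution reduces to simple counting over the labeled set SL: each αi is the accuracy of labeling function i on the instances it does not abstain on, and each βi is its coverage. The sketch below illustrates this; the data layout and the toy labeling functions are assumptions, not the authors' code.

```python
# Sketch of the closed-form estimation in Appendix A.1.
def estimate_dp_params(weak_labels, true_labels):
    """weak_labels[s][i] in {-1, 0, +1}: output of LF i on instance s;
    true_labels[s] in {-1, +1}.  Returns (alpha, beta), one value per LF."""
    m = len(weak_labels[0])
    n = len(true_labels)
    alpha, beta = [], []
    for i in range(m):
        agree = sum(weak_labels[s][i] == true_labels[s] for s in range(n))
        disagree = sum(weak_labels[s][i] == -true_labels[s] for s in range(n))
        labeled = agree + disagree      # instances where LF i did not abstain
        alpha.append(agree / labeled if labeled else 0.5)
        beta.append(labeled / n)
    return alpha, beta

# Toy example: LF 0 mimics DS (always votes +1 or -1), LF 1 mimics a
# positive pattern (votes +1 or abstains with 0).
weak  = [[+1, +1], [-1, 0], [+1, +1], [-1, 0]]
truth = [+1, -1, -1, -1]
print(estimate_dp_params(weak, truth))   # -> ([0.75, 0.5], [1.0, 0.5])
```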
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1430–1440 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1430 Multi-Grained Named Entity Recognition Congying Xia1,5, Chenwei Zhang1, Tao Yang2, Yaliang Li3∗, Nan Du2, Xian Wu2, Wei Fan2, Fenglong Ma4, Philip Yu1,5 1University of Illinois at Chicago, Chicago, IL, USA 2Tencent Medical AI Lab, Palo Alto, CA, USA; 3Alibaba Group, Bellevue, WA, USA 4University at Buffalo, Buffalo, NY, USA; 5Zhejiang Lab, Hangzhou, China {cxia8,czhang99,psyu}@uic.edu; [email protected] {tytaoyang,kevinxwu,davidwfan}@tencent.com [email protected]; [email protected] Abstract This paper presents a novel framework, MGNER, for Multi-Grained Named Entity Recognition where multiple entities or entity mentions in a sentence could be nonoverlapping or totally nested. Different from traditional approaches regarding NER as a sequential labeling task and annotate entities consecutively, MGNER detects and recognizes entities on multiple granularities: it is able to recognize named entities without explicitly assuming non-overlapping or totally nested structures. MGNER consists of a Detector that examines all possible word segments and a Classifier that categorizes entities. In addition, contextual information and a self-attention mechanism are utilized throughout the framework to improve the NER performance. Experimental results show that MGNER outperforms current state-of-the-art baselines up to 4.4% in terms of the F1 score among nested/non-overlapping NER tasks. 1 Introduction Effectively identifying meaningful entities or entity mentions from the raw text plays a crucial part in understanding the semantic meanings of natural language. Such a process is usually known as Named Entity Recognition (NER) and it is one of the fundamental tasks in natural language processing (NLP). A typical NER system takes an utterance as the input and outputs identified entities, such as person names, locations, and organizations. The extracted named entities can benefit various subsequent NLP tasks, including syntactic parsing (Koo and Collins, 2010), question answering (Krishnamurthy and Mitchell, 2015) and relation extraction (Lao and Cohen, 2010). However, accurately recognizing representative entities in natural language remains challenging. ∗Work was done when the author Yaliang Li was at Tencent America. Previous works treat NER as a sequence labeling problem. For example, Lample et al. (2016) achieve a decent performance on NER by incorporating deep recurrent neural networks (RNNs) with conditional random field (CRF) (Lafferty et al., 2001). However, a critical problem that arises by treating NER as a sequence labeling task is that it only recognizes non-overlapping entities in a single, sequential scan on the raw text; it fails to detect nested named entities which are embedded in longer entity mentions, as illustrated in Figure 1. Facility Last night , at the Chinese embassy in France , there was a holiday atmosphere . GPE GPE Figure 1: An example from the ACE-2004 dataset (Doddington et al., 2004) in which two GPEs (Geographical Entities) are nested in a Facility Entity. Due to the semantic structures within natural language, nested entities can be ubiquitous: e.g. 47% of the entities in the test split of ACE-2004 (Doddington et al., 2004) dataset overlap with other entities, and 42% of the sentences contain nested entities. 
Various approaches (Alex et al., 2007; Lu and Roth, 2015; Katiyar and Cardie, 2018; Muis and Lu, 2017; Wang and Lu, 2018) have been proposed in the past decade to extract nested named entities. However, these models are designed explicitly for recognizing nested named entities. They usually do not perform well on nonoverlapping named entity recognition compared to sequence labeling models. To tackle the aforementioned drawbacks, we propose a novel neural framework, named MGNER, for Multi-Grained Named Entity Recognition. It is suitable for tackling both Nested NER and Non-overlapping NER. The idea 1431 of MGNER is natural and intuitive, which is to first detect entity positions in various granularities via a Detector and then classify these entities into different pre-defined categories via a Classifier. MGNER has five types of modules: Word Processor, Sentence Processor, Entity Processor, Detection Network, and Classification Network, where each module can adopt a wide range of neural network designs. In summary, the contributions of this work are: • We propose a novel neural framework named MGNER for Multi-Grained Named Entity Recognition, aiming to detect both nested and non-overlapping named entities effectively in a single model. • MGNER is highly modularized. Each module in MGNER can adopt a wide range of neural network designs. Moreover, MGNER can be easily extended to many other related information extraction tasks, such as chunking (Ramshaw and Marcus, 1999) and slot filling (Mesnil et al., 2015). • Experimental results show that MGNER is able to achieve new state-of-the-art results on both Nested Named Entity Recognition tasks and Non-overlapping Named Entity Recognition tasks. 2 Related Work Existing approaches for recognizing nonoverlapping named entities usually treat the NER task as a sequence labeling problem. Various sequence labeling models achieve decent performance on NER, including probabilistic graph models such as Conditional Random Fields (CRF) (Ratinov and Roth, 2009), and deep neural networks like recurrent neural networks or convolutional neural networks (CNN). Hammerton (2003) is the first work to use Long Short-Term Memory (LSTM) for NER. Collobert et al. (2011) employ a CNN-CRF structure, which obtains competitive results to statistical models. Most recent works leverage an LSTM-CRF architecture. Huang et al. (2015) use hand-crafted spelling features; Ma and Hovy (2016) and Chiu and Nichols (2016) utilize a character CNN to represent spelling characteristics; Lample et al. (2016) employ a character LSTM instead. Moreover, the attention mechanism is also introduced in NER to dynamically decide how much information to use from a word or character level component (Rei et al., 2016). External resources have been used to further improve the NER performance. Peters et al. (2017) add pre-trained context embeddings from bidirectional language models to NER. Peters et al. (2018) learn a linear combination of internal hidden states stacked in a deep bidirectional language model, ELMo, to utilize both higher-level states which capture context-dependent aspects and lower-level states which model aspects of syntax. These sequence labeling models can only detect non-overlapping entities and fail to detect nested ones. Various approaches have been proposed for Nested Named Entity Recognition. Finkel and Manning (2009) propose a CRF-based constituency parser which takes each named entity as a constituent in the parsing tree. Ju et al. 
(2018) dynamically stack multiple flat NER layers and extract outer entities based on the inner ones. Such model may suffer from the error propagation problem if shorter entities are recognized incorrectly. Another series of approaches for Nested NER are based on hypergraphs. The idea of using hypergraph is first introduced in Lu and Roth (2015), which allows edges to be connected to different types of nodes to represent nested entities. Muis and Lu (2017) use a multigraph representation and introduce the notion of mention separator for nested entity detection. Both Lu and Roth (2015) and Muis and Lu (2017) rely on the hand-crafted features to extract nested entities and suffer from structural ambiguity issue. Wang and Lu (2018) present a neural segmental hypergraph model using neural networks to obtain distributed feature representation. Katiyar and Cardie (2018) also adopt a hypergraph-based formulation and learn the structure using an LSTM network in a greedy manner. One issue of these hypergraph approaches is the spurious structures of hypergraphs as they enumerate combinations of nodes, types and boundaries to represent entities. In other words, these models are specially designed for the nested named entities and are not suitable for the non-overlapping named entity recognition. Xu et al. (2017) propose a local detection method which relies on a Fixed-size Ordinally Forgetting Encoding (FOFE) method to encode utterance and a simple feed-forward neural network to either reject or predict the entity label for each 1432 Context Representation Context Self-Attention Attentive Context Word Processor Fully Connected ELMo Word LSTM Entity Representation Hidden States Entity LSTM Hidden States Sentence LSTM Word Processor Char level Character Embedding Character LSTM Word level Word Emb Word Representation Postag Emb ELMo Category Probabilities Char level Character Embedding Character LSTM Word level Word LSTM Word Representation Sentence Representation Postag Emb Fully Connected Attention LSTM Context-aware Entity Representation The Detector Sentence Processor Detection Network The Classifier Word Emb Entity Processor Classification Network Entity Probabilities Chinese France the Chinese embassy in France Last night , at the Chinese embassy in France … JJ NN , IN DT JJ NN IN NNP … the Chinese embassy in France DT JJ NN IN NNP Output Input Figure 2: The framework of MGNER for Multi-Grained Named Entity Recognition. It consists of a Detector and a Classifier. individual text fragment (Luan et al., 2018; Lee et al., 2017; He et al., 2018). Their model is in the same track with the framework we proposed whereas the difference is that we separate the NER task into two stages, i.e., detecting entity positions and classifying entity categories. 3 The Proposed Framework An overview of the proposed MGNER framework for multi-grained entity recognition, is illustrated in Figure 2. Specifically, MGNER consists of two sub-networks: the Detector and the Classifier. The Detector detects all the possible entity positions while the Classifier aims at classifying detected entities into pre-defined entity categories. The Detector has three modules: 1) Word Processor which extracts word-level semantic features, 2) Sentence Processor that learns context information for each utterance and 3) Detection Network that decides whether a word segment is an entity or not. 
The Classifier consists of 1) Word Processor, which has the same structure as the one in the Detector, 2) Entity Processor, which obtains entity features, and 3) Classification Network, which classifies entities into pre-defined categories. In addition, a self-attention mechanism is adopted in the Entity Processor to help the model capture and utilize entity-related contextual information.

Each module in MGNER can be replaced with a wide range of different neural network designs. For example, BERT (Devlin et al., 2018) can be used as the Word Processor, and a capsule model (Sabour et al., 2017; Xia et al., 2018) can be integrated into the Classification Network. It is worth mentioning that, in order to improve the learning speed as well as the performance of MGNER, the Detector and the Classifier are trained with a series of shared input features, including the pre-trained word embeddings and the pre-trained language model features. Sentence-level semantic features trained in the Detector are also transferred into the Classifier to introduce and utilize the contextual information. We present the key building blocks and the properties of the Detector in Section 3.1 and the Classifier in Section 3.2, respectively.

3.1 The Detector

The Detector is aimed at detecting possible entity positions within each utterance. It takes an utterance as the input and outputs a set of entity candidates. Essentially, we use a semi-supervised neural network inspired by Peters et al. (2017) to model this process. The architecture of the Detector is illustrated in the left part of Figure 2. Three major modules are contained in the Detector: Word Processor, Sentence Processor and Detection Network. More specifically, pre-trained word embeddings, POS tag information and character-level word information are used for generating semantically meaningful word representations. Word representations obtained from the Word Processor and the language model embeddings, ELMo (Peters et al., 2018), are concatenated together to produce context-aware sentence representations. Each possible word segment is then examined in the Detection Network, which decides whether to accept it as an entity or not.

3.1.1 Word Processor

The Word Processor extracts a semantically meaningful word representation for each token. Given an input utterance with K tokens (t_1, ..., t_K), each token t_k (1 ≤ k ≤ K) is represented as x_k = [w_k; p_k; c_k], the concatenation of a pre-trained word embedding w_k, a POS tag embedding p_k if it exists, and character-level word information c_k. The pre-trained word embedding w_k with dimension D_w is obtained from GloVe (Pennington et al., 2014). The character-level word information c_k is obtained with a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) layer to capture morphological information. The hidden size of this character LSTM is set as D_cl. As shown at the bottom of Figure 2, character embeddings are fed into the character LSTM, and the final hidden states from the forward and backward character LSTM are concatenated as the character-level word information c_k. The POS tag embeddings and character embeddings are randomly initialized and learned during training.
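To make the Word Processor concrete, below is a minimal PyTorch-style sketch of the x_k = [w_k; p_k; c_k] construction. The paper does not state an implementation framework, and the class name, vocabulary sizes, and tensor shapes here are our assumptions, not details given by the authors.

import torch
import torch.nn as nn

class WordProcessor(nn.Module):
    """Sketch: concatenate a pre-trained word embedding, a POS tag embedding,
    and a character-level BiLSTM summary into one word representation x_k."""
    def __init__(self, n_words, n_pos, n_chars,
                 d_word=300, d_pos=300, d_char=100, d_char_lstm=100):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, d_word)   # would be loaded from GloVe
        self.pos_emb = nn.Embedding(n_pos, d_pos)       # randomly initialized
        self.char_emb = nn.Embedding(n_chars, d_char)   # randomly initialized
        self.char_lstm = nn.LSTM(d_char, d_char_lstm,
                                 bidirectional=True, batch_first=True)

    def forward(self, word_ids, pos_ids, char_ids):
        # word_ids, pos_ids: (batch, K); char_ids: (batch, K, max_chars)
        b, k, c = char_ids.shape
        chars = self.char_emb(char_ids).view(b * k, c, -1)
        _, (h_n, _) = self.char_lstm(chars)             # h_n: (2, b*k, d_char_lstm)
        c_k = torch.cat([h_n[0], h_n[1]], dim=-1).view(b, k, -1)
        # x_k = [w_k; p_k; c_k]
        return torch.cat([self.word_emb(word_ids), self.pos_emb(pos_ids), c_k], dim=-1)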
3.1.2 Sentence Processor

To learn the contextual information from each sentence, another bidirectional LSTM, named the word LSTM, is applied to sequentially encode the utterance. For each token, the forward and backward hidden states are concatenated into the hidden states h_k. The dimension of the hidden states of the word LSTM is set as D_wl:

\overrightarrow{h}_k = \mathrm{LSTM}_{fw}(x_k, \overrightarrow{h}_{k-1}), \quad \overleftarrow{h}_k = \mathrm{LSTM}_{bw}(x_k, \overleftarrow{h}_{k+1}), \quad h_k = [\overrightarrow{h}_k; \overleftarrow{h}_k].   (1)

Besides, we also utilize language model embeddings pre-trained in an unsupervised way, as in the ELMo model (Peters et al., 2018). The pre-trained ELMo embeddings and the hidden states of the word LSTM h_k are concatenated. Hence, the concatenated hidden states h_k for each token can be reformulated as:

h_k = [\overrightarrow{h}_k; \overleftarrow{h}_k; \mathrm{ELMo}_k],   (2)

where ELMo_k is the ELMo embedding for token t_k. Specifically, a three-layer bi-LSTM neural network is trained as the language model. Since the lower-level LSTM hidden states have the ability to model syntactic properties and the higher-level LSTM hidden states can capture contextual information, ELMo computes the language model embeddings as a weighted combination of all the bidirectional LSTM hidden states:

\mathrm{ELMo}_k = \gamma \sum_{l=0}^{L} u_l \, h^{LM}_{k,l},   (3)

where γ is a task-specific scale parameter which indicates the importance of the entire ELMo vector to the NER task, L is the number of layers used in the pre-trained language model, the vector u = [u_0, ..., u_L] represents softmax-normalized weights that combine different layers, and h^{LM}_{k,l} is the language model hidden state of layer l at time step k.

A sentence bidirectional LSTM layer with a hidden dimension of D_sl is employed on top of the concatenated hidden states h_k. The forward and backward hidden states in this sentence LSTM are concatenated for each token as the final sentence representation f_k ∈ R^{2D_sl}.

3.1.3 Detection Network

Using the semantically meaningful features obtained in f_k, we can identify possible entities within each utterance. The strategy of finding entities is to first generate all the word segments as entity proposals and then estimate the probability of each proposal being an entity or not.

To enumerate all possible entity proposals, proposals of different lengths are generated surrounding each token position. For each token position, R entity proposals with lengths varying from 1 to the maximum length R are generated. Specifically, it is assumed that an input utterance consists of a sequence of N tokens (t_1, t_2, t_3, t_4, t_5, t_6, ..., t_N). To balance the performance and the computational cost, we set R as 6. We take each token position as the center and generate 6 proposals surrounding it. All the possible 6N proposals under the max length of 6 will be generated. As shown in Figure 3, the entity proposals generated surrounding token t_3 are: (t_3), (t_3, t_4), (t_2, t_3, t_4), (t_2, t_3, t_4, t_5), (t_1, t_2, t_3, t_4, t_5), (t_1, t_2, t_3, t_4, t_5, t_6). Similar entity proposals are generated for all the token positions, and proposals that contain invalid indexes like (t_0, t_1, t_2) are deleted. Hence we can obtain all the valid entity proposals under the condition that the max length is R.

Figure 3: All possible entity proposals generated surrounding token t_3 when the maximum length of an entity proposal R is set as 6.
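As a concrete illustration of this enumeration, here is a small Python sketch that reproduces the pattern of Figure 3; the exact centering scheme for even lengths is our reading of the figure, and spans are (start, end) indices with the end exclusive.

def generate_proposals(num_tokens, max_len=6):
    """Enumerate entity proposals 'surrounding' each token position (cf. Figure 3).
    Out-of-range spans are dropped and duplicates are merged."""
    proposals = set()
    for center in range(num_tokens):
        for length in range(1, max_len + 1):
            start = center - (length - 1) // 2   # grow right first, then left
            end = start + length
            if start >= 0 and end <= num_tokens:
                proposals.add((start, end))
    return sorted(proposals)

# For a 6-token sentence, the proposals centered on t3 (index 2) are
# (2,3), (2,4), (1,4), (1,5), (0,5), (0,6), matching Figure 3.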
For each token position, we simultaneously estimate the probability of being an entity or not for the R proposals. A fully connected layer with a two-class softmax function is used to determine the quality of entity proposals:

s_k = \mathrm{softmax}(f_k W_p + b_p),   (4)

where W_p ∈ R^{2D_sl × 2R} and b_p ∈ R^{2R} are the weights and the bias for the entity proposal layer; s_k contains 2R scores, including R scores for being an entity and R scores for not being an entity at position k. The cross-entropy loss is employed in the Detector as follows:

L_p = - \sum_{k=1}^{K} \sum_{r=1}^{R} y_k^r \log s_k^r,   (5)

where y_k^r is the label for proposal type r at position k and s_k^r is the probability of being an entity for proposal type r at position k.

It is worth mentioning that most entity proposals are negative. Thus, to balance the influence of positive and negative proposals in the loss function, we keep all positive proposals and use down-sampling for negative proposals when calculating the loss L_p. For each batch, we fix the number of total proposals used in the loss function, including all positive proposals and sampled negative proposals, as N_b. In the inference procedure of the Detection Network, an entity proposal is recognized as an entity candidate if its score of being an entity is higher than its score of not being an entity.

3.2 The Classifier

The Classifier module aims at classifying the entity candidates obtained from the Detector into different pre-defined entity categories. For the nested NER task, all the proposed entities are kept and fed into the Classifier. For the NER task with non-overlapping entities, we utilize the non-maximum suppression (NMS) algorithm (Neubeck and Van Gool, 2006) to deal with redundant, overlapping entity proposals and output the real entity candidates. The idea of NMS is simple but effective: pick the entity proposal with the maximum probability, delete conflicting entity proposals, and repeat this process until all proposals are processed. Eventually, we obtain the non-conflicting entity candidates as the input of the Classifier.

To understand the contextual information of a proposed entity, we utilize both sentence-level context information and a self-attention mechanism to help the model focus on entity-related context tokens. The framework of the Classifier is shown in the right part of Figure 2. Essentially, it consists of three modules: Word Processor, Entity Processor and Classification Network.

3.2.1 Word Processor

The same Word Processor as in the Detector is used here to get the word representations for the entity candidates obtained from the Detector. The word-level embedding, which is the concatenation of the pre-trained word embedding and the POS tag embedding if it exists, is transferred from the Word Processor in the Detector to improve the performance as well as to speed up the learning process. The character-level LSTM and character embeddings are trained separately in the Detector and the Classifier.

                      ACE-2004                          ACE-2005                          CoNLL-2003
                      Train        Dev        Test      Train        Dev        Test      Train      Dev       Test
Sentences  #total     6,799        829        879       7,336        958        1,047     14,987     3,466     3,684
           #overlaps  2,683 (39%)  293 (35%)  373 (42%) 2,683 (37%)  340 (35%)  330 (32%) -          -         -
Entities   #total     22,207       2,511      3,031     24,687       3,217      3,027     23,499     5,942     5,648
           #overlaps  10,170 (46%) 1,091 (43%) 1,418 (47%) 9,937 (40%) 1,192 (37%) 1,184 (39%) -      -         -
           length > 6 1,439 (6%)   179 (7%)   199 (7%)  1,343 (5%)   148 (5%)   160 (6%)  23 (0.1%)  8 (0.1%)  0 (0%)
           max length 57           35         43        49           30         27        10         10        6
Table 1: Corpora statistics for the ACE-2004, ACE-2005 and CoNLL-2003 datasets.
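As a sketch of the greedy NMS filtering described in Section 3.2, assuming each proposal is a (start, end, score) tuple with the end index exclusive (this is our interpretation of the procedure, not the authors' code):

def non_max_suppression(proposals):
    """Keep the highest-scoring proposal, drop every proposal overlapping it,
    and repeat until no proposals remain."""
    kept = []
    remaining = sorted(proposals, key=lambda p: p[2], reverse=True)
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        remaining = [p for p in remaining
                     if p[1] <= best[0] or p[0] >= best[1]]  # non-overlapping spans only
    return kept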
3.2.2 Entity Processor

The word representations are fed into a bidirectional word LSTM with hidden size D_wl, and the hidden states are concatenated with the ELMo language model embeddings as the entity features. A bidirectional LSTM with hidden size D_el is then applied to the entity features to capture sequence information among the entity words. The last hidden states of the forward and backward entity LSTM are concatenated as the entity representation e ∈ R^{2D_el}.

The same word in different contexts may have different semantic meanings. To this end, in our model, we take the contextual information into consideration when learning the semantic representations of entity candidates. We capture the contextual information from the other words in the same utterance. Denote c as the context feature vector for these context words; it can be extracted from the sentence representation f_k in the Detector. Hence, the sentence features trained in the Detector are directly transferred to the Classifier.

An easy way to model context words is to concatenate all the word representations or average them. However, this naive approach may fail when there are a lot of unrelated context words. To select highly relevant context words and learn an accurate contextual representation, we propose a self-attention mechanism to simulate and dynamically control the relatedness between the context and the entity. The self-attention module takes the entity representation e and all the context features C = [c_1, c_2, ..., c_N] as inputs, and outputs a vector of attention weights a:

a = \mathrm{softmax}(C W e^{T}),   (6)

where W ∈ R^{2D_sl × 2D_el} is a weight matrix for the self-attention layer, and a contains the self-attention weights on different context words. To help the model focus on entity-related context, the attentive context C_att is calculated as the attention-weighted context:

C_{att} = a \ast C.   (7)

The length of the attentive context C_att varies across contexts. However, the goal of the Classification Network is to classify entity candidates into different categories, and thus it requires a fixed embedding size. We achieve that by adding another LSTM layer. An attention LSTM with hidden dimension D_ml is used, and the concatenation of the last hidden states of its forward and backward layers serves as the context representation m ∈ R^{2D_ml}. Hence the shape of the context representation is aligned. We concatenate the context representation and the entity representation together as a context-aware entity representation o = [m; e] to classify entity candidates.

3.2.3 Classification Network

A two-layer fully connected neural network is used to classify candidates into pre-defined categories:

p = \mathrm{softmax}(W_{c2} \, \sigma(o W_{c1} + b_{c1}) + b_{c2}),   (8)

where W_{c1} ∈ R^{(2D_ml + 2D_el) × D_h}, b_{c1} ∈ R^{D_h}, W_{c2} ∈ R^{D_h × (D_t + 1)}, and b_{c2} ∈ R^{D_t + 1} are the weights of this fully connected neural network, and D_t is the number of entity types. This classification function classifies entity candidates into (D_t + 1) types: we add one more type for the scenario that a candidate is not a real entity. Finally, the hinge-ranking loss is adopted in the Classification Network:

L_c = \sum_{y_w \in Y_w} \max\{0, \Delta + p_{y_w} - p_{y_r}\},   (9)

where p_{y_w} is the probability of a wrong label y_w, p_{y_r} is the probability of the right label y_r, and Δ is a margin. The hinge-ranking loss pushes the probability of the right label above the probabilities of the wrong labels and improves the classification performance.
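A minimal PyTorch-style sketch of the hinge-ranking loss in Eq. (9); the batching convention and variable names are our assumptions, and the default margin of 5 follows Section 4.2.

import torch

def hinge_ranking_loss(probs, gold, margin=5.0):
    """probs: (batch, D_t + 1) category probabilities; gold: (batch,) gold label indices.
    For every wrong label y_w, penalize max{0, margin + p_{y_w} - p_{y_r}} as in Eq. (9)."""
    idx = torch.arange(probs.size(0))
    p_right = probs[idx, gold].unsqueeze(1)                # (batch, 1)
    losses = torch.clamp(margin + probs - p_right, min=0)  # (batch, D_t + 1)
    losses[idx, gold] = 0.0                                # the gold label is not a wrong label
    return losses.sum(dim=1).mean()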
4 Experiments

To show the ability and effectiveness of our proposed framework, MGNER, for Multi-Grained Named Entity Recognition, we conduct experiments on both the nested NER task and the traditional non-overlapping NER task.

4.1 Datasets

We mainly evaluate our framework on ACE-2004 and ACE-2005 (Doddington et al., 2004), with the same splits used by previous works (Luo et al., 2015; Wang and Lu, 2018), for the nested NER task. Specifically, seven different types of entities, such as person, facility, weapon and vehicle, are contained in the ACE datasets. For the traditional NER task, we use the CoNLL-2003 dataset (Tjong Kim Sang and De Meulder, 2003), which contains four types of named entities: location, organization, person and miscellaneous. An overview of these three datasets is illustrated in Table 1. It can be observed that most entities are no longer than 6 tokens, and thus we select the maximum entity length R = 6.

4.2 Implementation Details

We performed random search (Bergstra and Bengio, 2012) for hyper-parameter optimization and selected the best setting based on performance on the development set. We employ the Adam optimizer (Kingma and Ba, 2014) with learning rate decay for all the experiments. The learning rate is set as 0.001 at the beginning and exponentially decayed by 0.9 after each epoch. The batch size of utterances is set as 20. In order to balance the influence of positive proposals and negative proposals, we use down-sampling for negative ones, and the total proposal number N_b for each batch is 128. To alleviate over-fitting, we add dropout regularization after the word representation layer and all the LSTM layers with a dropout rate of 0.5. In addition, we employ an early stopping strategy when there is no performance improvement on the development dataset after three epochs.

The pre-trained word embeddings are from GloVe (Pennington et al., 2014), and the word embedding dimension D_w is 300. Besides, the ELMo 5.5B data (https://allennlp.org/elmo) is utilized in the experiments for the language model embeddings. Moreover, the size of the character embedding c_k is 100, and the hidden size of the character LSTM D_cl is also 100. The size of the POS tag embedding p_k is 300 for the ACE datasets, and no POS tag information is used for the CoNLL-2003 dataset. The hidden dimensions of the word LSTM layer D_wl, the sentence LSTM layer D_sl, the entity LSTM layer D_el and the attention LSTM layer D_ml are all set to 300. The hidden dimension of the classification layer D_h is 50. The margin Δ in the hinge-ranking loss for entity category classification is set to 5. The ELMo scale parameter γ is 3.35 in the Detector and 3.05 in the Classifier.

Model                      ACE-2004            ACE-2005
                           P     R     F1      P     R     F1
Lu and Roth (2015)         70.0  56.9  62.8    66.3  59.2  62.5
Lample et al. (2016)       71.3  50.5  58.3    64.1  52.4  57.6
Muis and Lu (2017)         72.7  58.0  64.5    69.1  58.1  63.1
Xu et al. (2017)           68.2  54.3  60.5    67.4  55.1  60.6
Katiyar and Cardie (2018)  73.6  71.8  72.7    70.6  70.4  70.5
Ju et al. (2018)           -     -     -       74.2  70.3  72.2
Wang et al. (2018)         74.9  71.8  73.3    74.5  71.5  73.0
Wang and Lu (2018)         78.0  72.4  75.1    76.8  72.3  74.5
MGNER w/o context          79.8  76.3  78.0    79.6  75.6  77.5
MGNER w/o attention        81.5  76.5  78.9    79.4  76.0  77.7
MGNER                      81.7  77.4  79.5    79.0  77.3  78.2
Table 2: Performance on the ACE-2004 and ACE-2005 test sets for the Nested NER task.
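The optimization setup described in Section 4.2 can be summarized in a short PyTorch-style sketch; pairing Adam with an ExponentialLR scheduler is our assumption, as the paper only states the rates.

import torch

def build_optimizer(model):
    # Adam with an initial learning rate of 0.001, decayed by 0.9 after each epoch
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
    return optimizer, scheduler  # call scheduler.step() once per epoch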
4.3 Results

Nested NER Task. The proposed MGNER is very suitable for detecting nested named entities since every possible entity is examined and classified. In order to validate this advantage, we compare MGNER with numerous baseline models: 1) Lu and Roth (2015), who propose mention hypergraphs for recognizing overlapping entities; 2) Lample et al. (2016), who adopt the LSTM-CRF structure for sequence labeling; 3) Muis and Lu (2017), who introduce mention separators to tag gaps between words for recognizing overlapping mentions; 4) Xu et al. (2017), who propose a local detection method; 5) Katiyar and Cardie (2018), who propose a hypergraph-based model using an LSTM for learning feature representations; 6) Ju et al. (2018), who use a layered model which extracts outer entities based on inner ones; 7) Wang et al. (2018), who propose a neural transition-based model that constructs nested mentions through a sequence of actions; and 8) Wang and Lu (2018), who adopt a neural segmental hypergraph model.

Experimental results on the Nested NER task for the ACE-2004 and ACE-2005 datasets are reported in Table 2. We can observe from Table 2 that our proposed framework MGNER outperforms all the baseline approaches. For both datasets, our model improves the state-of-the-art result by around 4% in terms of precision, recall, as well as the F1 score.

To study the contribution of different modules in MGNER, we also report the performance of two ablation variations of MGNER at the bottom of Table 2. MGNER w/o attention is a variation of MGNER which removes the self-attention mechanism, and MGNER w/o context removes all the context information. To remove the self-attention mechanism, we feed the context feature C directly into a bi-directional LSTM to obtain the context representation m, instead of the attentive context vector C_att. As for MGNER w/o context, we only use the entity representation e for classification instead of the context-aware entity representation o. By adding the context information, the F1 score improves by 0.9% on the ACE-2004 dataset and 0.7% on the ACE-2005 dataset. The self-attention mechanism improves the F1 score by 0.6% on the ACE-2004 dataset and 0.5% on the ACE-2005 dataset.

Model                OVERLAPPING          NON-OVERLAPPING
                     P     R     F1       P     R     F1
Lu and Roth (2015)   68.1  52.6  59.4     64.1  65.1  64.6
Muis and Lu (2017)   70.4  55.0  61.8     67.2  63.4  65.2
Wang et al. (2018)   77.4  70.5  73.8     76.1  69.6  72.7
Wang and Lu (2018)   80.6  73.6  76.9     75.5  71.5  73.4
MGNER                82.6  76.0  79.2     77.8  79.5  78.6
Table 3: Results on different types of sentences (ACE-2005).

To analyze how well our model performs on overlapping and non-overlapping entities, we split the test data into two portions: sentences with and without overlapping entities (following the splits used by Wang and Lu (2018)). Four state-of-the-art nested NER models are compared with our proposed framework MGNER on the ACE-2005 dataset. As illustrated in Table 3, MGNER consistently performs better than the baselines on both portions, especially on the non-overlapping part. This observation indicates that our model can better recognize non-overlapping entities than previous nested NER models.

The first step in MGNER is to detect entity positions using the Detector, where the effectiveness of proposing correct entity candidates immediately affects the performance of the whole model. To this end, we provide the experimental results of detecting correct entities in the Detector module here. The precision, recall and F1 score are 85.23, 91.84 and 88.41 for the ACE-2004 dataset, and 84.95, 89.35 and 87.09 for the ACE-2005 dataset.
Model                     CoNLL-2003
                          Dev            Test
Lu and Roth (2015)        89.2           83.8
Muis and Lu (2017)        -              84.3
Xu et al. (2017)          -              90.85
Wang and Lu (2018)        -              90.2
Lample et al. (2016)      -              90.94
Ma and Hovy (2016)        94.74          91.21
Chiu and Nichols (2016)   94.03 ± 0.23   91.62 ± 0.33
Peters et al. (2017)      -              91.93 ± 0.19
Peters et al. (2018)      -              92.22 ± 0.10
MGNER w/o context         95.21 ± 0.12   92.23 ± 0.06
MGNER w/o attention       95.23 ± 0.06   92.26 ± 0.09
MGNER                     95.24 ± 0.13   92.28 ± 0.12
Table 4: F1 scores on the CoNLL-2003 development set (Dev) and test set (Test) for the English NER task. Mean and standard deviation across five runs are reported. POS tag information is not used.

NER Task. We also evaluate the proposed MGNER framework on the NER task, which requires recognizing non-overlapping entities. Two types of baseline models are compared here: sequence labeling models, which are designed specifically for the non-overlapping NER task, and nested NER models, which also provide the ability to detect non-overlapping mentions. The first type includes 1) Lample et al. (2016), who adopt the LSTM-CRF structure; 2) Ma and Hovy (2016), who use an LSTM-CNNs-CRF architecture; 3) Chiu and Nichols (2016), who propose a CNN-LSTM-CRF model; 4) Peters et al. (2017), who add semi-supervised language model embeddings; and 5) Peters et al. (2018), who utilize the state-of-the-art ELMo language model embeddings. The second type includes the four nested NER models mentioned in the Nested NER section: 1) Lu and Roth (2015); 2) Muis and Lu (2017); 3) Xu et al. (2017); 4) Wang and Lu (2018).

Table 4 shows the F1 scores of different approaches on the CoNLL-2003 development set and test set for the English NER task. Mean and standard deviation across five runs are reported. It can be observed from Table 4 that the proposed MGNER model outperforms all the baselines. The models designed for non-overlapping entity detection usually perform better than nested NER models on the NER task. Our proposed framework outperforms state-of-the-art results on both the NER and nested NER tasks. Xu et al. (2017) is the best baseline model among the nested models since it shares a similar idea with our proposed framework by individually examining each entity proposal. From the ablation study, we can observe that by purely adding the context information, the F1 score on the CoNLL-2003 test set improves from 92.23 to 92.26, and by adding the attention mechanism, the F1 score further improves to 92.28. We also provide the performance of detecting non-overlapping entities in the Detector here: the precision, recall and F1 score are 95.33, 95.69 and 95.51 on the CoNLL-2003 dataset.

5 Conclusions

In this work, we propose a novel neural framework named MGNER for Multi-Grained Named Entity Recognition, where multiple entities or entity mentions in a sentence could be non-overlapping or totally nested. MGNER is a framework with high modularity, and each component in MGNER can adopt a wide range of neural networks. Experimental results show that MGNER is able to achieve state-of-the-art results on both the nested NER task and the traditional non-overlapping NER task.

Acknowledgments

We thank the reviewers for their valuable comments. Special thanks go to Lu Wei from Singapore University of Technology and Design for sharing the dataset split details. This work is supported in part by NSF through grants IIS-1526499, IIS-1763325, and CNS-1626432.

References

Beatrice Alex, Barry Haddow, and Claire Grover. 2007. Recognising nested named entities in biomedical text. In Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing, pages 65–72. Association for Computational Linguistics.
James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281–305.

Jason PC Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics, 4:357–370.

Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493–2537.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

George R Doddington, Alexis Mitchell, Mark A Przybocki, Lance A Ramshaw, Stephanie M Strassel, and Ralph M Weischedel. 2004. The automatic content extraction (ace) program - tasks, data, and evaluation. In LREC, volume 2, page 1.

Jenny Rose Finkel and Christopher D Manning. 2009. Nested named entity recognition. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1, pages 141–150. Association for Computational Linguistics.

James Hammerton. 2003. Named entity recognition with long short-term memory. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, pages 172–175. Association for Computational Linguistics.

Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 364–369.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.

Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.

Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1446–1459.

Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 861–871.

Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Terry Koo and Michael Collins. 2010. Efficient third-order dependency parsers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1–11. Association for Computational Linguistics.

Jayant Krishnamurthy and Tom M Mitchell. 2015. Learning a compositional semantics for freebase with an open predicate vocabulary. Transactions of the Association for Computational Linguistics, 3:257–270.

John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282–289.

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT, pages 260–270.

Ni Lao and William W Cohen. 2010. Relational retrieval using a combination of path-constrained random walks. Machine Learning, 81(1):53–67.
Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188–197.

Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 857–867.

Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3219–3232.

Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint entity recognition and disambiguation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 879–888.

Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074.

Grégoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Larry Heck, Gokhan Tur, Dong Yu, et al. 2015. Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(3):530–539.

Aldrian Obaja Muis and Wei Lu. 2017. Labeling gaps between words: Recognizing overlapping mentions with mention separators. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2608–2618. Association for Computational Linguistics.

Alexander Neubeck and Luc Van Gool. 2006. Efficient non-maximum suppression. In Pattern Recognition, 2006. ICPR 2006. 18th International Conference on, volume 3, pages 850–855. IEEE.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.

Matthew Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power. 2017. Semi-supervised sequence tagging with bidirectional language models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1756–1765.

Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237.

Lance A Ramshaw and Mitchell P Marcus. 1999. Text chunking using transformation-based learning. In Natural Language Processing Using Very Large Corpora, pages 157–176. Springer.

Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, pages 147–155. Association for Computational Linguistics.

Marek Rei, Gamal Crichton, and Sampo Pyysalo. 2016. Attending to characters in neural sequence labeling models. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 309–318.
Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. 2017. Dynamic routing between capsules. In Advances in Neural Information Processing Systems, pages 3856–3866.

Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, pages 142–147. Association for Computational Linguistics.

Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 204–214.

Bailin Wang, Wei Lu, Yu Wang, and Hongxia Jin. 2018. A neural transition-based model for nested mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1011–1017.

Congying Xia, Chenwei Zhang, Xiaohui Yan, Yi Chang, and Philip Yu. 2018. Zero-shot user intent detection via capsule neural networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3090–3099.

Mingbin Xu, Hui Jiang, and Sedtawut Watcharawittayakul. 2017. A local detection approach for named entity recognition and mention detection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1237–1247.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441–1451, Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

ERNIE: Enhanced Language Representation with Informative Entities

Zhengyan Zhang1,2,3*, Xu Han1,2,3*, Zhiyuan Liu1,2,3†, Xin Jiang4, Maosong Sun1,2,3, Qun Liu4
1Department of Computer Science and Technology, Tsinghua University, Beijing, China
2Institute for Artificial Intelligence, Tsinghua University, Beijing, China
3State Key Lab on Intelligent Technology and Systems, Tsinghua University, Beijing, China
4Huawei Noah's Ark Lab
{zhangzhengyan14,hanxu17}@mails.tsinghua.edu.cn
* indicates equal contribution
† Corresponding author: Z. Liu ([email protected])

Abstract

Neural language representation models such as BERT pre-trained on large-scale corpora can well capture rich semantic patterns from plain text, and be fine-tuned to consistently improve the performance of various NLP tasks. However, the existing pre-trained language models rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better language understanding. We argue that informative entities in KGs can enhance language representation with external knowledge. In this paper, we utilize both large-scale textual corpora and KGs to train an enhanced language representation model (ERNIE), which can take full advantage of lexical, syntactic, and knowledge information simultaneously. The experimental results have demonstrated that ERNIE achieves significant improvements on various knowledge-driven tasks, and meanwhile is comparable with the state-of-the-art model BERT on other common NLP tasks. The source code and experiment details of this paper can be obtained from https://github.com/thunlp/ERNIE.

1 Introduction

Pre-trained language representation models, including feature-based (Mikolov et al., 2013; Pennington et al., 2014; Peters et al., 2017, 2018) and fine-tuning (Dai and Le, 2015; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2019) approaches, can capture rich language information from text and then benefit many NLP applications. BERT (Devlin et al., 2019), as one of the most recently proposed models, obtains state-of-the-art results on various NLP applications by simple fine-tuning, including named entity recognition (Sang and De Meulder, 2003), question answering (Rajpurkar et al., 2016; Zellers et al., 2018), natural language inference (Bowman et al., 2015), and text classification (Wang et al., 2018).

Figure 1: An example of incorporating extra knowledge information for language understanding. The figure shows the sentence "Bob Dylan wrote Blowin' in the Wind in 1962, and wrote Chronicles: Volume One in 2004." linked to knowledge facts. The solid lines present the existing knowledge facts. The red dotted lines present the facts extracted from the sentence in red. The green dot-dash lines present the facts extracted from the sentence in green.

Although pre-trained language representation models have achieved promising results and worked as a routine component in many NLP tasks, they neglect to incorporate knowledge information for language understanding. As shown in Figure 1, without knowing that Blowin' in the Wind and Chronicles: Volume One are a song and a book respectively, it is difficult to recognize the two occupations of Bob Dylan, i.e., songwriter and writer, on the entity typing task.
Furthermore, it is nearly impossible to extract the fine-grained relations, such as composer and author, on the relation classification task. For the existing pre-trained language representation models, these two sentences are syntactically ambiguous, like "UNK wrote UNK in UNK". Hence, considering rich knowledge information can lead to better language understanding and accordingly benefits various knowledge-driven applications, e.g., entity typing and relation classification.

For incorporating external knowledge into language representation models, there are two main challenges. (1) Structured Knowledge Encoding: regarding the given text, how to effectively extract and encode its related informative facts in KGs for language representation models is an important problem; (2) Heterogeneous Information Fusion: the pre-training procedure for language representation is quite different from the knowledge representation procedure, leading to two individual vector spaces. How to design a special pre-training objective to fuse lexical, syntactic, and knowledge information is another challenge.

To overcome the challenges mentioned above, we propose Enhanced Language RepresentatioN with Informative Entities (ERNIE), which pre-trains a language representation model on both large-scale textual corpora and KGs:

(1) For extracting and encoding knowledge information, we first recognize named entity mentions in text and then align these mentions to their corresponding entities in KGs. Instead of directly using the graph-based facts in KGs, we encode the graph structure of KGs with knowledge embedding algorithms like TransE (Bordes et al., 2013), and then take the informative entity embeddings as input for ERNIE. Based on the alignments between text and KGs, ERNIE integrates entity representations in the knowledge module into the underlying layers of the semantic module.

(2) Similar to BERT, we adopt the masked language model and the next sentence prediction as the pre-training objectives. Besides, for the better fusion of textual and knowledge features, we design a new pre-training objective by randomly masking some of the named entity alignments in the input text and asking the model to select appropriate entities from KGs to complete the alignments. Unlike the existing pre-trained language representation models that only utilize local context to predict tokens, our objectives require models to aggregate both context and knowledge facts for predicting both tokens and entities, leading to a knowledgeable language representation model.

We conduct experiments on two knowledge-driven NLP tasks, i.e., entity typing and relation classification. The experimental results show that ERNIE significantly outperforms the state-of-the-art model BERT on these knowledge-driven tasks, by taking full advantage of lexical, syntactic, and knowledge information. We also evaluate ERNIE on other common NLP tasks, and ERNIE still achieves comparable results.

2 Related Work

Many efforts are devoted to pre-training language representation models for capturing language information from text and then utilizing the information for specific NLP tasks. These pre-training approaches can be divided into two classes, i.e., feature-based approaches and fine-tuning approaches.

The early work (Collobert and Weston, 2008; Mikolov et al., 2013; Pennington et al., 2014) focuses on adopting feature-based approaches to transform words into distributed representations.
As these pre-trained word representations capture syntactic and semantic information in textual corpora, they are often used as input embeddings and initialization parameters for various NLP models, and offer significant improvements over random initialization parameters (Turian et al., 2010). Since these word-level models often suffer from word polysemy, Peters et al. (2018) further adopt the sequence-level model (ELMo) to capture complex word features across different linguistic contexts and use ELMo to generate context-aware word embeddings.

Different from the above-mentioned feature-based language approaches, which only use the pre-trained language representations as input features, Dai and Le (2015) train auto-encoders on unlabeled text, and then use the pre-trained model architecture and parameters as a starting point for other specific NLP models. Inspired by Dai and Le (2015), more pre-trained language representation models for fine-tuning have been proposed. Howard and Ruder (2018) present AWD-LSTM (Merity et al., 2018) to build a universal language model (ULMFiT). Radford et al. (2018) propose a generative pre-trained Transformer (Vaswani et al., 2017) (GPT) to learn language representations. Devlin et al. (2019) propose a deep bidirectional model with multiple-layer Transformers (BERT), which achieves state-of-the-art results for various NLP tasks.

Though both feature-based and fine-tuning language representation models have achieved great success, they ignore the incorporation of knowledge information. As demonstrated in recent work, injecting extra knowledge information can significantly enhance original models, such as reading comprehension (Mihaylov and Frank, 2018; Zhong et al., 2018), machine translation (Zaremoodi et al., 2018), natural language inference (Chen et al., 2018), knowledge acquisition (Han et al., 2018a), and dialog systems (Madotto et al., 2018). Hence, we argue that extra knowledge information can effectively benefit existing pre-training models. In fact, some work has attempted joint representation learning of words and entities for effectively leveraging external KGs and achieved promising results (Wang et al., 2014; Toutanova et al., 2015; Han et al., 2016; Yamada et al., 2016; Cao et al., 2017, 2018). Sun et al. (2019) propose the knowledge masking strategy for the masked language model to enhance language representation by knowledge [1]. In this paper, we further utilize both corpora and KGs to train an enhanced language representation model based on BERT.

[1] It is a coincidence that both Sun et al. (2019) and we chose ERNIE as the model name, which follows interesting naming habits like ELMo and BERT. Sun et al. (2019) released their code on March 16th and submitted their paper to arXiv on April 19th, while we submitted our paper to ACL, whose deadline is March 4th.

Figure 2: The left part is the architecture of ERNIE. The right part is the aggregator for the mutual integration of the input of tokens and entities. The information fusion layer takes two kinds of input: one is the token embedding, and the other one is the concatenation of the token embedding and entity embedding. After information fusion, it outputs new token embeddings and entity embeddings for the next layer.
3 Methodology

In this section, we present the overall framework of ERNIE and its detailed implementation, including the model architecture in Section 3.2, the novel pre-training task designed for encoding informative entities and fusing heterogeneous information in Section 3.4, and the details of the fine-tuning procedure in Section 3.5.

3.1 Notations

We denote a token sequence as {w_1, ..., w_n} (in this paper, tokens are at the subword level), where n is the length of the token sequence. Meanwhile, we denote the entity sequence aligned to the given tokens as {e_1, ..., e_m}, where m is the length of the entity sequence. Note that m is not equal to n in most cases, as not every token can be aligned to an entity in KGs. Furthermore, we denote the whole vocabulary containing all tokens as V, and the entity list containing all entities in KGs as E. If a token w ∈ V has a corresponding entity e ∈ E, their alignment is defined as f(w) = e. In this paper, we align an entity to the first token in its named entity phrase, as shown in Figure 2.

3.2 Model Architecture

As shown in Figure 2, the whole model architecture of ERNIE consists of two stacked modules: (1) the underlying textual encoder (T-Encoder), responsible for capturing basic lexical and syntactic information from the input tokens, and (2) the upper knowledgeable encoder (K-Encoder), responsible for integrating extra token-oriented knowledge information into the textual information from the underlying layer, so that we can represent the heterogeneous information of tokens and entities in a unified feature space. Besides, we denote the number of T-Encoder layers as N, and the number of K-Encoder layers as M.

To be specific, given a token sequence {w_1, ..., w_n} and its corresponding entity sequence {e_1, ..., e_m}, the textual encoder first sums the token embedding, segment embedding, and positional embedding for each token to compute its input embedding, and then computes lexical and syntactic features {w_1, ..., w_n} as follows,

\{\mathbf{w}_1, \dots, \mathbf{w}_n\} = \text{T-Encoder}(\{w_1, \dots, w_n\}),   (1)

where T-Encoder(·) is a multi-layer bidirectional Transformer encoder. As T-Encoder(·) is identical to its implementation in BERT and BERT is prevalent, we exclude a comprehensive description of this module and refer readers to Devlin et al. (2019) and Vaswani et al. (2017).

After computing {w_1, ..., w_n}, ERNIE adopts a knowledgeable encoder K-Encoder to inject the knowledge information into the language representation. To be specific, we represent {e_1, ..., e_m} with their entity embeddings {e_1, ..., e_m}, which are pre-trained by the effective knowledge embedding model TransE (Bordes et al., 2013). Then, both {w_1, ..., w_n} and {e_1, ..., e_m} are fed into K-Encoder for fusing heterogeneous information and computing the final output embeddings,

\{\mathbf{w}_1^o, \dots, \mathbf{w}_n^o\}, \{\mathbf{e}_1^o, \dots, \mathbf{e}_n^o\} = \text{K-Encoder}(\{\mathbf{w}_1, \dots, \mathbf{w}_n\}, \{\mathbf{e}_1, \dots, \mathbf{e}_m\}).   (2)

{w_1^o, ..., w_n^o} and {e_1^o, ..., e_n^o} will be used as features for specific tasks. More details of the knowledgeable encoder K-Encoder will be introduced in Section 3.3.
3.3 Knowledgeable Encoder

As shown in Figure 2, the knowledgeable encoder K-Encoder consists of stacked aggregators, which are designed for encoding both tokens and entities as well as fusing their heterogeneous features. In the i-th aggregator, the input token embeddings {w^(i-1)_1, ..., w^(i-1)_n} and entity embeddings {e^(i-1)_1, ..., e^(i-1)_m} from the preceding aggregator are fed into two multi-head self-attentions (MH-ATTs) (Vaswani et al., 2017) respectively,

\{\tilde{w}_1^{(i)}, \dots, \tilde{w}_n^{(i)}\} = \text{MH-ATT}(\{w_1^{(i-1)}, \dots, w_n^{(i-1)}\}), \quad \{\tilde{e}_1^{(i)}, \dots, \tilde{e}_m^{(i)}\} = \text{MH-ATT}(\{e_1^{(i-1)}, \dots, e_m^{(i-1)}\}).   (3)

Then, the i-th aggregator adopts an information fusion layer for the mutual integration of the token and entity sequence, and computes the output embedding for each token and entity. For a token w_j and its aligned entity e_k = f(w_j), the information fusion process is as follows,

h_j = \sigma(\tilde{W}_t^{(i)} \tilde{w}_j^{(i)} + \tilde{W}_e^{(i)} \tilde{e}_k^{(i)} + \tilde{b}^{(i)}), \quad w_j^{(i)} = \sigma(W_t^{(i)} h_j + b_t^{(i)}), \quad e_k^{(i)} = \sigma(W_e^{(i)} h_j + b_e^{(i)}),   (4)

where h_j is the inner hidden state integrating the information of both the token and the entity, and σ(·) is the non-linear activation function, which is usually the GELU function (Hendrycks and Gimpel, 2016). For the tokens without corresponding entities, the information fusion layer computes the output embeddings without integration as follows,

h_j = \sigma(\tilde{W}_t^{(i)} \tilde{w}_j^{(i)} + \tilde{b}^{(i)}), \quad w_j^{(i)} = \sigma(W_t^{(i)} h_j + b_t^{(i)}).   (5)

For simplicity, the i-th aggregator operation is denoted as follows,

\{w_1^{(i)}, \dots, w_n^{(i)}\}, \{e_1^{(i)}, \dots, e_m^{(i)}\} = \text{Aggregator}(\{w_1^{(i-1)}, \dots, w_n^{(i-1)}\}, \{e_1^{(i-1)}, \dots, e_m^{(i-1)}\}).   (6)

The output embeddings of both tokens and entities computed by the top aggregator will be used as the final output embeddings of the knowledgeable encoder K-Encoder.
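To make one aggregator concrete, here is a minimal PyTorch-style sketch of the information-fusion step in Eqs. (4)-(5); the hidden size of h_j, the class name, and the use of nn.Linear (which bundles the bias terms) are our assumptions, not details fixed by the paper.

import torch
import torch.nn as nn

class InformationFusion(nn.Module):
    """Mix a token embedding and its aligned entity embedding through a shared
    hidden state h_j, then project back to separate token and entity outputs."""
    def __init__(self, d_token=768, d_entity=100):
        super().__init__()
        self.W_t_tilde = nn.Linear(d_token, d_token)                # \tilde{W}_t and \tilde{b}
        self.W_e_tilde = nn.Linear(d_entity, d_token, bias=False)   # \tilde{W}_e (bias added once)
        self.W_t = nn.Linear(d_token, d_token)                      # W_t and b_t
        self.W_e = nn.Linear(d_token, d_entity)                     # W_e and b_e
        self.act = nn.GELU()

    def forward(self, w_j, e_k=None):
        if e_k is None:                                             # Eq. (5): no aligned entity
            h = self.act(self.W_t_tilde(w_j))
            return self.act(self.W_t(h)), None
        h = self.act(self.W_t_tilde(w_j) + self.W_e_tilde(e_k))     # Eq. (4)
        return self.act(self.W_t(h)), self.act(self.W_e(h))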
3.4 Pre-training for Injecting Knowledge

In order to inject knowledge into language representation by informative entities, we propose a new pre-training task for ERNIE, which randomly masks some token-entity alignments and then requires the system to predict all corresponding entities based on aligned tokens. As our task is similar to training a denoising auto-encoder (Vincent et al., 2008), we refer to this procedure as a denoising entity auto-encoder (dEA). Considering that the size of E is quite large for the softmax layer, we only require the system to predict entities based on the given entity sequence instead of all entities in KGs. Given the token sequence {w_1, ..., w_n} and its corresponding entity sequence {e_1, ..., e_m}, we define the aligned entity distribution for the token w_i as follows,

p(e_j | w_i) = \frac{\exp(\text{linear}(\mathbf{w}_i^o) \cdot \mathbf{e}_j)}{\sum_{k=1}^{m} \exp(\text{linear}(\mathbf{w}_i^o) \cdot \mathbf{e}_k)},   (7)

where linear(·) is a linear layer. Eq. (7) is used to compute the cross-entropy loss function for dEA.

Considering that there are some errors in token-entity alignments, we perform the following operations for dEA: (1) 5% of the time, for a given token-entity alignment, we replace the entity with another random entity, which aims to train the model to correct cases where a token is aligned with a wrong entity; (2) 15% of the time, we mask token-entity alignments, which aims to train the model to correct cases where the entity alignment system does not extract all existing alignments; (3) the rest of the time, we keep token-entity alignments unchanged, which aims to encourage the model to integrate the entity information into token representations for better language understanding.

Similar to BERT, ERNIE also adopts the masked language model (MLM) and the next sentence prediction (NSP) as pre-training tasks to enable ERNIE to capture lexical and syntactic information from tokens in text. More details of these pre-training tasks can be found in Devlin et al. (2019). The overall pre-training loss is the sum of the dEA, MLM and NSP losses.

3.5 Fine-tuning for Specific Tasks

Figure 3: Modifying the input sequence for the specific tasks. To align tokens among different types of input, we use dotted rectangles as placeholders. The colorful rectangles present the specific mark tokens.

As shown in Figure 3, for various common NLP tasks, ERNIE can adopt a fine-tuning procedure similar to that of BERT. We can take the final output embedding of the first token, which corresponds to the special [CLS] token, as the representation of the input sequence for specific tasks. For some knowledge-driven tasks (e.g., relation classification and entity typing), we design a special fine-tuning procedure.

For relation classification, the task requires systems to classify relation labels of given entity pairs based on context. The most straightforward way to fine-tune ERNIE for relation classification is to apply a pooling layer to the final output embeddings of the given entity mentions, and represent the given entity pair with the concatenation of their mention embeddings for classification. In this paper, we design another method, which modifies the input token sequence by adding two mark tokens to highlight entity mentions. These extra mark tokens play a similar role to position embeddings in conventional relation classification models (Zeng et al., 2015). Then, we also take the [CLS] token embedding for classification. Note that we design different tokens [HD] and [TL] for head entities and tail entities respectively.

The specific fine-tuning procedure for entity typing is a simplified version of relation classification. As previous typing models make full use of both context embeddings and entity mention embeddings (Shimaoka et al., 2016; Yaghoobzadeh and Schütze, 2017; Xin et al., 2018), we argue that the modified input sequence with the mention mark token [ENT] can guide ERNIE to combine both context information and entity mention information attentively.
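A small Python sketch of the input modification for the two knowledge-driven tasks (cf. Figure 3); the span convention and this helper function are our own, and only the mark tokens [CLS], [SEP], [ENT], [HD], and [TL] come from the paper.

def mark_input(tokens, task, head=None, tail=None, mention=None):
    """tokens: list of sub-word tokens; spans are (start, end) indices, end exclusive.
    Wrap entity mentions with task-specific mark tokens before feeding ERNIE."""
    out = list(tokens)
    if task == "relation_classification":
        # insert the later span first so the earlier span's indices stay valid
        for (s, e), mark in sorted([(head, "[HD]"), (tail, "[TL]")],
                                   key=lambda x: x[0][0], reverse=True):
            out = out[:s] + [mark] + out[s:e] + [mark] + out[e:]
    elif task == "entity_typing":
        s, e = mention
        out = out[:s] + ["[ENT]"] + out[s:e] + ["[ENT]"] + out[e:]
    return ["[CLS]"] + out + ["[SEP]"]

# Example (sentence from Figure 3):
# mark_input("mark twain wrote the million pound bank note in 1893 .".split(),
#            "relation_classification", head=(0, 2), tail=(3, 8))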
4 Experiments

In this section, we present the details of pre-training ERNIE and the fine-tuning results on five NLP datasets, which include both knowledge-driven tasks and common NLP tasks.

4.1 Pre-training Dataset

The pre-training procedure primarily acts in accordance with the existing literature on pre-training language models. Due to the large cost of training ERNIE from scratch, we adopt the parameters of BERT released by Google (https://github.com/google-research/bert) to initialize the Transformer blocks for encoding tokens. Since pre-training is a multi-task procedure consisting of NSP, MLM, and dEA, we use English Wikipedia as our pre-training corpus and align the text to Wikidata. After converting the corpus into the formatted data for pre-training, the annotated input has nearly 4,500M subwords and 140M entities, where sentences having fewer than 3 entities are discarded. Before pre-training ERNIE, we adopt the knowledge embeddings trained on Wikidata (https://www.wikidata.org/) by TransE as the input embeddings for entities. To be specific, we sample a part of Wikidata which contains 5,040,986 entities and 24,267,796 fact triples. The entity embeddings are fixed during training, and the parameters of the entity encoding modules are all initialized randomly.

4.2 Parameter Settings and Training Details

In this work, we denote the hidden dimensions of token embeddings and entity embeddings as H_w and H_e respectively, and the numbers of self-attention heads as A_w and A_e respectively. In detail, we have the following model size: N = 6, M = 6, H_w = 768, H_e = 100, A_w = 12, A_e = 4. The total number of parameters is about 114M. The total amount of parameters of BERT_BASE is about 110M, which means the knowledgeable module of ERNIE is much smaller than the language module and has little impact on the run-time performance. We only pre-train ERNIE on the annotated corpus for one epoch. To accelerate the training process, we reduce the max sequence length from 512 to 256, as the computation of self-attention is a quadratic function of the length. To keep the number of tokens in a batch the same as in BERT, we double the batch size to 512. Except for setting the learning rate as 5e-5, we largely follow the pre-training hyper-parameters used in BERT.

For fine-tuning, most hyper-parameters are the same as in pre-training, except batch size, learning rate, and number of training epochs. We find the following ranges of possible values work well on the training datasets with gold annotations, i.e., batch size: 32, learning rate (Adam): 5e-5, 3e-5, 2e-5, number of epochs ranging from 3 to 10. We also evaluate ERNIE on the distantly supervised dataset FIGER (Ling et al., 2015). Because of the powerful expressive ability of deeply stacked Transformer blocks, we found that a small batch size would lead the model to overfit the training data. Hence, we use a larger batch size and fewer training epochs to avoid overfitting, and keep the range of learning rates unchanged, i.e., batch size: 2048, number of epochs: 2, 3. As most datasets do not have entity annotations, we use TAGME (Ferragina and Scaiella, 2010) to extract the entity mentions in the sentences and link them to their corresponding entities in KGs.

Dataset      Train      Develop  Test   Type
FIGER        2,000,000  10,000   563    113
Open Entity  2,000      2,000    2,000  6
Table 1: The statistics of the entity typing datasets FIGER and Open Entity.

Model              Acc.   Macro  Micro
NFGEC (Attentive)  54.53  74.76  71.58
NFGEC (LSTM)       55.60  75.15  71.73
BERT               52.04  75.16  71.63
ERNIE              57.19  76.51  73.39
Table 2: Results of various models on FIGER (%).

4.3 Entity Typing

Given an entity mention and its context, entity typing requires systems to label the entity mention with its respective semantic types. To evaluate performance on this task, we fine-tune ERNIE on two well-established datasets, FIGER (Ling et al., 2015) and Open Entity (Choi et al., 2018). The training set of FIGER is labeled with distant supervision, and its test set is annotated by humans. Open Entity is a completely manually-annotated dataset. The statistics of these two datasets are shown in Table 1. We compare our model with the following baseline models for entity typing:

NFGEC. NFGEC is a hybrid model proposed by Shimaoka et al. (2016).
4.3 Entity Typing

Given an entity mention and its context, entity typing requires systems to label the entity mention with its respective semantic types. To evaluate performance on this task, we fine-tune ERNIE on two well-established datasets: FIGER (Ling et al., 2015) and Open Entity (Choi et al., 2018). The training set of FIGER is labeled with distant supervision, and its test set is annotated by humans. Open Entity is a completely manually-annotated dataset. The statistics of these two datasets are shown in Table 1.

Table 1: The statistics of the entity typing datasets FIGER and Open Entity.
Dataset      Train      Develop   Test    Type
FIGER        2,000,000  10,000    563     113
Open Entity  2,000      2,000     2,000   6

We compare our model with the following baseline models for entity typing:

NFGEC. NFGEC is a hybrid model proposed by Shimaoka et al. (2016). NFGEC combines the representations of the entity mention, its context, and extra hand-crafted features as input, and is the state-of-the-art model on FIGER. As this paper focuses on comparing the general language representation abilities of various neural models, we do not use the hand-crafted features in this work.

UFET. For Open Entity, we add a new hybrid model, UFET (Choi et al., 2018), for comparison. UFET was proposed together with the Open Entity dataset; it uses a single Bi-LSTM for context representation instead of the two Bi-LSTMs separated by entity mentions used in NFGEC.

Besides NFGEC and UFET, we also report the result of fine-tuning BERT with the same input format introduced in Section 3.5 for fair comparison. Following the evaluation criteria used in previous work, we compare NFGEC, BERT, and ERNIE on FIGER, and adopt strict accuracy, loose macro, and loose micro scores for evaluation. We compare NFGEC, BERT, UFET, and ERNIE on Open Entity, and adopt precision, recall, and micro-F1 scores for evaluation.

The results on FIGER are shown in Table 2.

Table 2: Results of various models on FIGER (%).
Model              Acc.    Macro   Micro
NFGEC (Attentive)  54.53   74.76   71.58
NFGEC (LSTM)       55.60   75.15   71.73
BERT               52.04   75.16   71.63
ERNIE              57.19   76.51   73.39

From the results, we observe that: (1) BERT achieves comparable results with NFGEC on the macro and micro metrics. However, BERT has lower accuracy than the best NFGEC model. As strict accuracy is the ratio of instances whose predictions are identical to the human annotations, this suggests that BERT, with its powerful fitting ability, learns some of the wrong labels introduced by distant supervision. (2) Compared with BERT, ERNIE significantly improves the strict accuracy, indicating that the external knowledge regularizes ERNIE to avoid fitting the noisy labels and accordingly benefits entity typing.

The results on Open Entity are shown in Table 3.

Table 3: Results of various models on Open Entity (%).
Model         P       R       F1
NFGEC (LSTM)  68.80   53.30   60.10
UFET          77.40   60.60   68.00
BERT          76.37   70.96   73.56
ERNIE         78.42   72.90   75.56

From the table, we observe that: (1) BERT and ERNIE achieve much higher recall scores than the previous entity typing models, which means pre-trained language models make full use of both the unsupervised pre-training and the manually-annotated training data for better entity typing. (2) Compared to BERT, ERNIE improves the precision by 2% and the recall by 2%, which means the informative entities help ERNIE predict the labels more precisely.

In summary, ERNIE effectively reduces the noisy-label challenge in FIGER, a widely-used distantly supervised entity typing dataset, by injecting information from KGs. Besides, ERNIE also outperforms the baselines on Open Entity, which has gold annotations.
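For reference, the strict accuracy and loose macro/micro scores used above are typically computed over predicted and gold type sets per mention. The following is a sketch of the standard formulation from the fine-grained typing literature, not the authors' evaluation script.

```python
# Sketch of common entity-typing metrics: strict accuracy, loose macro F1,
# loose micro F1. gold_sets and pred_sets are lists of Python sets of types,
# one per mention. Assumes the standard formulation; details may differ from
# the script actually used in the paper.

def typing_metrics(gold_sets, pred_sets):
    n = len(gold_sets)
    strict = sum(g == p for g, p in zip(gold_sets, pred_sets)) / n

    def f1(p, r):
        return 2 * p * r / (p + r) if p + r else 0.0

    # Loose macro: average per-mention precision and recall.
    macro_p = sum(len(g & p) / len(p) if p else 0.0
                  for g, p in zip(gold_sets, pred_sets)) / n
    macro_r = sum(len(g & p) / len(g) if g else 0.0
                  for g, p in zip(gold_sets, pred_sets)) / n

    # Loose micro: pool type counts over all mentions.
    inter = sum(len(g & p) for g, p in zip(gold_sets, pred_sets))
    micro_p = inter / max(sum(len(p) for p in pred_sets), 1)
    micro_r = inter / max(sum(len(g) for g in gold_sets), 1)

    return strict, f1(macro_p, macro_r), f1(micro_p, micro_r)
```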
4.4 Relation Classification

Relation classification aims to determine the correct relation between two entities in a given sentence, which is an important knowledge-driven NLP task. To evaluate performance on this task, we fine-tune ERNIE on two well-established datasets: FewRel (Han et al., 2018c) and TACRED (Zhang et al., 2017). The statistics of these two datasets are shown in Table 4.

Table 4: The statistics of the relation classification datasets FewRel and TACRED.
Dataset   Train    Develop   Test     Relation
FewRel    8,000    16,000    16,000   80
TACRED    68,124   22,631    15,509   42

As the original experimental setting of FewRel is few-shot learning, we rearrange the FewRel dataset for the common relation classification setting. Specifically, we sample 100 instances from each class for the training set, and sample 200 instances each for the development and test sets. There are 80 classes in FewRel, and 42 classes (including a special relation, "no relation") in TACRED.

We compare our model with the following baseline models for relation classification:

CNN. With a convolution layer, a max-pooling layer, and a non-linear activation layer, CNN obtains the output sentence embedding and then feeds it into a relation classifier. To better capture the positions of the head and tail entities, position embeddings are introduced into CNN (Zeng et al., 2015; Lin et al., 2016; Wu et al., 2017; Han et al., 2018b).

PA-LSTM. Zhang et al. (2017) propose PA-LSTM, which introduces a position-aware attention mechanism over an LSTM network to evaluate the relative contribution of each word in the sequence to the final sentence representation.

C-GCN. Zhang et al. (2018) adopt graph convolution operations to model dependency trees for relation classification. To encode the word order and reduce the side effect of errors in dependency parsing, Contextualized GCN (C-GCN) first uses a Bi-LSTM to generate contextualized representations as input for the GCN model.

In addition to these three baselines, we also fine-tune BERT with the same input format introduced in Section 3.5 for fair comparison.

As FewRel does not contain any null instances, i.e., instances where there is no relation between the entities, we adopt macro-averaged metrics to report model performance. Since FewRel is built by checking whether the sentences contain facts in Wikidata, we drop the related facts in KGs before pre-training for fair comparison.

Table 5: Results of various models on FewRel and TACRED (%).
           FewRel                 TACRED
Model      P      R      F1      P      R      F1
CNN        69.51  69.64  69.35   70.30  54.20  61.20
PA-LSTM    -      -      -       65.70  64.50  65.10
C-GCN      -      -      -       69.90  63.30  66.40
BERT       85.05  85.11  84.89   67.23  64.81  66.00
ERNIE      88.49  88.44  88.32   69.97  66.08  67.97

From Table 5, we have two observations on FewRel: (1) As the training data does not have enough instances to train the CNN encoder from scratch, CNN only achieves an F1 score of 69.35%. However, the pre-trained models, including BERT and ERNIE, increase the F1 score by at least 15%. (2) ERNIE achieves an absolute F1 increase of 3.4% over BERT, which means fusing external knowledge is very effective.

In TACRED, nearly 80% of the instances are null instances, so we follow previous work (Zhang et al., 2017) and adopt micro-averaged instead of macro-averaged metrics to report model performance. The results of CNN, PA-LSTM, and C-GCN come from the paper by Zhang et al. (2018), and are the best reported results for CNN, RNN, and GCN models respectively. From Table 5, we observe that: (1) The C-GCN model outperforms the strong BERT model by an F1 increase of 0.4%, as C-GCN utilizes dependency trees and the entity mask strategy. The entity mask strategy refers to replacing each subject (and similarly each object) entity with a special NER token, which is similar to our proposed pre-training task dEA. (2) ERNIE achieves the best recall and F1 scores, and increases the F1 of BERT by nearly 2.0%, which proves the effectiveness of the knowledgeable module for relation classification.

In conclusion, we find that pre-trained language models can provide more information for relation classification than the vanilla CNN and RNN encoders. ERNIE outperforms BERT on both relation classification datasets, especially on FewRel, which has a much smaller training set. This demonstrates that extra knowledge helps the model make full use of small training data, which is important for most NLP tasks, as large-scale annotated data is often unavailable.
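To make the FewRel re-split described in Section 4.4 concrete, the sketch below samples 100 training and 200 development/test instances per relation class. Function and field names are illustrative, not from the released data tools.

```python
# Sketch of the per-class FewRel re-split: 100 train / 200 dev / 200 test
# instances for each of the 80 relations (matching the sizes in Table 4).
import random
from collections import defaultdict

def resplit_fewrel(instances, seed=0):
    rng = random.Random(seed)
    by_relation = defaultdict(list)
    for ins in instances:
        by_relation[ins["relation"]].append(ins)

    train, dev, test = [], [], []
    for rel, items in by_relation.items():
        rng.shuffle(items)
        train.extend(items[:100])
        dev.extend(items[100:300])
        test.extend(items[300:500])
    return train, dev, test
```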
4.5 GLUE

The General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018) is a collection of diverse natural language understanding tasks (Warstadt et al., 2018; Socher et al., 2013; Dolan and Brockett, 2005; Agirre et al., 2007; Williams et al., 2018; Rajpurkar et al., 2016; Dagan et al., 2006; Levesque et al., 2011), and is the main benchmark used in Devlin et al. (2019). To explore whether our knowledgeable module degrades the performance on common NLP tasks, we evaluate ERNIE on 8 datasets of GLUE and compare it with BERT. In Table 6, we report the results of our evaluation submissions and those of BERT from the leaderboard.

Table 6: Results of BERT and ERNIE on different tasks of GLUE (%). The number below each task is the size of its training set.
Model      MNLI-(m/mm)  QQP    QNLI   SST-2
           392k         363k   104k   67k
BERT_BASE  84.6/83.4    71.2   -      93.5
ERNIE      84.0/83.2    71.2   91.3   93.5

Model      CoLA   STS-B  MRPC   RTE
           8.5k   5.7k   3.5k   2.5k
BERT_BASE  52.1   85.8   88.9   66.4
ERNIE      52.3   83.2   88.2   68.8

We notice that ERNIE is consistent with BERT_BASE on big datasets like MNLI, QQP, QNLI, and SST-2. The results become more unstable on small datasets, that is, ERNIE is better on CoLA and RTE, but worse on STS-B and MRPC. In short, ERNIE achieves comparable results with BERT_BASE on GLUE. On the one hand, this suggests that GLUE does not require external knowledge for language representation. On the other hand, it shows that ERNIE does not lose textual information after heterogeneous information fusion.

4.6 Ablation Study

In this subsection, we explore the effects of the informative entities and the knowledgeable pre-training task (dEA) for ERNIE using the FewRel dataset. w/o entities and w/o dEA refer to fine-tuning ERNIE without the entity sequence input and without the pre-training task dEA, respectively.

Table 7: Ablation study on FewRel (%).
Model               P      R      F1
BERT                85.05  85.11  84.89
ERNIE               88.49  88.44  88.32
ERNIE w/o entities  85.89  85.89  85.79
ERNIE w/o dEA       85.85  85.75  85.62

As shown in Table 7, we have the following observations: (1) Without the entity sequence input, dEA still injects knowledge information into the language representation during pre-training, which increases the F1 score of BERT by 0.9%. (2) Although the informative entities bring much knowledge information, which intuitively benefits relation classification, ERNIE without dEA takes little advantage of it, leading to an F1 increase of only 0.7% over BERT.

5 Conclusion

In this paper, we propose ERNIE to incorporate knowledge information into language representation models. Accordingly, we propose the knowledgeable aggregator and the pre-training task dEA for better fusion of heterogeneous information from both text and KGs. The experimental results demonstrate that ERNIE has better abilities than BERT both at denoising distantly supervised data and at fine-tuning on limited data. Three important directions remain for future research: (1) inject knowledge into feature-based pre-training models such as ELMo (Peters et al., 2018); (2) introduce diverse structured knowledge into language representation models, such as ConceptNet (Speer and Havasi, 2012), which is different from the world knowledge database Wikidata; (3) annotate more real-world corpora heuristically for building larger pre-training data. These directions may lead to more general and effective language understanding.
Acknowledgement This work is funded by the Natural Science Foundation of China (NSFC) and the German Research Foundation (DFG) in Project Crossmodal Learning, NSFC 61621136008 / DFG TRR-169, the National Natural Science Foundation of China (NSFC No. 61572273) and China Association for Science and Technology (2016QNRC001). References Eneko Agirre, Llu’is M‘arquez, and Richard Wicentowski. 2007. Proceedings of the fourth international workshop on semantic evaluations (semeval2007). In Proceedings of SemEval-2007. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Proceedings of NIPS, pages 2787–2795. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of EMNLP, pages 632–642. Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Chengjiang Li, Xu Chen, and Tiansi Dong. 2018. Joint representation learning of cross-lingual words and entities via attentive distant supervision. In Proceedings of EMNLP, pages 227–237. Yixin Cao, Lifu Huang, Heng Ji, Xu Chen, and Juanzi Li. 2017. Bridge text and knowledge by learning multi-prototype entity mention embedding. In Proceedings of ACL, pages 1623–1633. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. 2018. Neural natural language inference models enhanced with external knowledge. In Proceedings of ACL, pages 2406–2417. Eunsol Choi, Omer Levy, Yejin Choi, and Luke Zettlemoyer. 2018. Ultra-fine entity typing. In Proceedings of ACL, pages 87–96. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML, pages 160–167. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Proceedings of MLCW, pages 177– 190. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Proceedings of NIPS, pages 3079–3087. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT. William B Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of IWP. Paolo Ferragina and Ugo Scaiella. 2010. Tagme: on-the-fly annotation of short text fragments (by wikipedia entities). In Proceedings of CIKM, pages 1625–1628. Xu Han, Zhiyuan Liu, and Maosong Sun. 2016. Joint representation learning of text and knowledge for knowledge graph completion. arXiv preprint arXiv:1611.04125. Xu Han, Zhiyuan Liu, and Maosong Sun. 2018a. Neural knowledge acquisition via mutual attention between knowledge graph and text. In Proceedings of AAAI. Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, and Peng Li. 2018b. Hierarchical relation extraction with coarse-to-fine grained attention. In Proceedings of EMNLP, pages 2236–2245. Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018c. Fewrel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceedings of EMNLP, pages 4803–4809. 1450 Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of ACL, pages 328–339. 
Hector J Levesque, Ernest Davis, and Leora Morgenstern. 2011. The Winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, volume 46, page 47. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of ACL, volume 1, pages 2124–2133. Xiao Ling, Sameer Singh, and Daniel S Weld. 2015. Design challenges for entity linking. TACL, 3:315– 328. Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of ACL, pages 1468–1478. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing lstm language models. In Proceedings of ICLR. Todor Mihaylov and Anette Frank. 2018. Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge. In Proceedings of ACL, pages 821–832. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111–3119. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP, pages 1532–1543. Matthew Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power. 2017. Semi-supervised sequence tagging with bidirectional language models. In Proceedings of ACL, pages 1756–1765. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. In Proceedings of Technical report, OpenAI. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of EMNLP, pages 2383–2392. Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of NAACL-HLT, pages 142–147. Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2016. An attentive neural architecture for fine-grained entity type classification. In Proceedings of AKBC, pages 69–74. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP, pages 1631–1642. Robert Speer and Catherine Havasi. 2012. Representing general relational knowledge in conceptnet 5. In Proceedings of LREC, pages 3679–3686. Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced representation through knowledge integration. Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of EMNLP, pages 1499–1509. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of ACL, pages 384–394. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. 
Attention is all you need. In Proceedings of NIPS, pages 5998– 6008. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In Proceedings of ICML, pages 1096–1103. Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of EMNLP, pages 353–355. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph and text jointly embedding. In Proceedings of EMNLP, pages 1591– 1601. Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2018. Neural network acceptability judgments. arXiv preprint 1805.12471. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of NAACL-HLT, pages 1112–1122. 1451 Yi Wu, David Bamman, and Stuart Russell. 2017. Adversarial training for relation extraction. In Proceedings of EMNLP, pages 1778–1783. Ji Xin, Hao Zhu, Xu Han, Zhiyuan Liu, and Maosong Sun. 2018. Put it back: Entity typing with language model enhancement. In Proceedings of EMNLPs, pages 993–998. Yadollah Yaghoobzadeh and Hinrich Sch¨utze. 2017. Multi-level representations for fine-grained typing of knowledge base entities. In Proceedings of EACL, pages 578–589. Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the embedding of words and entities for named entity disambiguation. In Proceedings of CoNLL, pages 250– 259. Poorya Zaremoodi, Wray Buntine, and Gholamreza Haffari. 2018. Adaptive knowledge sharing in multi-task learning: Improving low-resource neural machine translation. In Proceedings of ACL, pages 656–661. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of EMNLP, pages 93–104. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of EMNLP, pages 1753–1762. Yuhao Zhang, Peng Qi, and Christopher D Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of EMNLP, pages 2205–2215. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D Manning. 2017. Positionaware attention and supervised data improve slot filling. In Proceedings of EMNLP, pages 35–45. Wanjun Zhong, Duyu Tang, Nan Duan, Ming Zhou, Jiahai Wang, and Jian Yin. 2018. Improving question answering by commonsense-based pre-training. arXiv preprint arXiv:1809.03568.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 140–150 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 140 A Joint Named-Entity Recognizer for Heterogeneous Tag-sets Using a Tag Hierarchy Genady Beryozkin, Yoel Drori, Oren Gilon, Tzvika Hartman and Idan Szpektor Google Research Tel Aviv, Israel {genady,dyoel,ogilon,tzvika,szpektor} @google.com Abstract We study a variant of domain adaptation for named-entity recognition where multiple, heterogeneously tagged training sets are available. Furthermore, the test tag-set is not identical to any individual training tag-set. Yet, the relations between all tags are provided in a tag hierarchy, covering the test tags as a combination of training tags. This setting occurs when various datasets are created using different annotation schemes. This is also the case of extending a tag-set with a new tag by annotating only the new tag in a new dataset. We propose to use the given tag hierarchy to jointly learn a neural network that shares its tagging layer among all tag-sets. We compare this model to combining independent models and to a model based on the multitasking approach. Our experiments show the benefit of the tag-hierarchy model, especially when facing non-trivial consolidation of tag-sets. 1 Introduction Named Entity Recognition (NER) has seen significant progress in the last couple of years with the application of Neural Networks to the task. Such models achieve state-of-the-art performance with little or no manual feature engineering (Collobert et al., 2011; Huang et al., 2015; Lample et al., 2016; Ma and Hovy, 2016; Dernoncourt et al., 2017). Following this success, more complex NER setups are approached with neural models, among them domain adaptation (Qu et al., 2016; He and Sun, 2017; Dong et al., 2017). In this work we study one type of domain adaptation for NER, denoted here heterogeneous tagsets. In this variant, samples from the test set are not available at training time. Furthermore, the test tag-set differs from each training tag-set. However every test tag can be represented either as a single training tag or as a combination of several training tags. This information is given in the form of a hypernym hierarchy over all tags, training and test (see Fig. 1). This setting arises when different schemes are used for annotating multiple datasets for the same task. This often occurs in the medical domain, where healthcare providers use customized tagsets to create their own private test sets (Shickel et al., 2017; Lee et al., 2018). Another scenario is selective annotation, as in the case of extending an existing tag-set, e.g. {‘Name’, ‘Location’}, with another tag, e.g. ‘Date’. To save annotation effort, new training data is labeled only with the new tag. This case of disjoint tag-sets is also discussed in the work of Greenberg et al. (2018). A similar case is extending a training-set with new examples in which only rare tags are annotated. In domains where training data is scarce, out-ofdomain datasets annotated with infrequent tags may be very valuable. A naive approach concatenates all trainingsets, ignoring the differences between the tagging schemes in each example. A different approach would be to learn to tag with multiple training tagsets. Then, in a post-processing step, the predictions from the different tag-sets need to be consolidated into a single test tag sequence, resolving tagging differences along the way. 
We study two such models. The first model learns an independent NER model for each training tag-set. The second model applies the multitasking (MTL) (Collobert et al., 2011; Ruder, 2017) paradigm, in which a shared latent representation of the input text is fed into separate tagging layers. The above models require heuristic postprocessing to consolidate the different predicted tag sequences. To overcome this limitation, we propose a model that incorporates the given tag hierarchy within the neural NER model. Specifically, this model learns to predict a tag sequence only over the fine-grained tags in the hierarchy. 141 Tag-set 1 (T1): Name, Street, City, Hospital, Age>90 Tag-set 2 (T2): First Name, Last Name, Address, Age Tag-set 3 (T3): Name, Location, Date First Name Last Name Street City Hospital Age>90 Date Address Location Name Age Figure 1: A tag hierarchy for three tag-sets. At training time, gradients on each dataset-specific labeled examples are propagated as gradients on plausible fine-grained tags. At inference time the model predicts a single sequence of fine-grained tags, which are then mapped to the test tag-set by traversing the tag hierarchy. Importantly, all tagging decisions are performed in the model without the need for a post-processing consolidation step. We conducted two experiments. The first evaluated the extension of a tag-set with a new tag via selective annotation of a new dataset with only the extending tag, using datasets from the medical and news domains. In the second experiment we integrated two full tag-sets from the medical domain with their training data while evaluating on a third test tag-set. The results show that the model which incorporates the tag-hierarchy is more robust compared to a combination of independent models or MTL, and typically outperforms them. This is especially evident when many tagging collisions need to be settled at post-processing. In these cases, the performance gap in favor of the tag-hierarchy model is large. 2 Background and Definitions 2.1 Task Definition The goal in the heterogeneous tag-sets domain adaptation task is to learn an NER model M that given an input token sequence x = {xi}n 1 infers a tag sequence y = {yi}n 1 = M(x) over a test tag-set T s, ∀i yi∈T s. To learn the model, K training datasets {DSr k}K k=1 are provided, each labeled with its own tag-set T r k . Superscripts ’s’ and ’r’ stand for ’test’ and ’training’, respectively. In this task, no training tag-set is identical to the test tagset T s by itself. However, all tags in T s can be covered by combining the training tag-sets {T r k }K k=1. This information is provided in the form of a directed acyclic graph (DAG) representing hyperFigure 2: Neural architecture for NER. nymy relations between all training and test tags. Fig. 1 illustrates such a hierarchy. As mentioned above, an example scenario is selective annotation, in which an original tag-set is extended with a new tag t, each with its own training data, and the test tag-set is their union. But, some setups require combinations other than a simple union, e.g. covering the test tag ‘Address’ with the finer training tags ‘Street’ and ‘City’, each from a different tag-set. This task is different from inductive domain adaptation (Pan and Yang, 2010; Ruder, 2017), in which the tag-sets are different but the tasks differ as well (e.g. NER and parsing), with no need to map the outcomes to a single tag-set at test time. 
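As a concrete illustration of the task setup, the hypernym hierarchy of Fig. 1 can be encoded as a small graph and used to map a tag onto a target tag-set by walking hypernym edges. The fragment below is our own sketch: it covers only part of the hierarchy, assumes a single hypernym per tag (the paper allows a general DAG), and is not the authors' code.

```python
# Illustrative fragment of the Fig. 1 hierarchy: child tag -> hypernym tag.
HYPERNYM = {
    "First Name": "Name", "Last Name": "Name",
    "Street": "Address", "City": "Address",
    "Address": "Location", "Hospital": "Location",
    "Age>90": "Age",
}

def map_to_tagset(tag, target_tagset):
    """Walk hypernym edges until a tag in the target tag-set is reached."""
    while tag is not None and tag not in target_tagset:
        tag = HYPERNYM.get(tag)
    return tag if tag is not None else "Other"

# e.g. map_to_tagset("Street", {"Name", "Location", "Date"}) -> "Location"
```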
2.2 Neural network for NER As the underlying architecture shared by all models in this paper, we follow the neural network proposed by Lample et al. (2016), which achieved state-of-the-art results on NER. In this model, depicted in Fig. 2, each input token xi is represented as a combination of: (a) a one-hot vector xw i , mapping the input to a fixed word vocabulary, and (b) a sequence of one-hot vectors {xc i,j}ni j=1, representing the input word’s character sequence. Each input token xi is first embedded in latent space by applying both a word-embedding matrix, wei = E xw i , and a character-based embedding layer cei = CharBiRNN({xc i,j}) (Ling et al., 2015). This output of this step is ei = cei ⊕wei, where ⊕stands for vector concatenation. Then, the embedding vector sequence {ei}n 1 142 Figure 3: NER multitasking architecture for 3 tag-sets. is re-encoded in context using a bidirectional RNN layer {ri}n 1 = BiRNN({ei}n 1) (Schuster and Paliwal, 1997). The sequence {ri}n 1 constitutes the latent representation of the input text. Finally, each re-encoded vector ri is projected to tag space for the target tag-set T, ti = P ri, where |ti| = |T|. The sequence {ti}n 1 is then taken as input to a CRF layer (Lafferty et al., 2001), which maintains a global tag transition matrix. At inference time, the model output is y = M(x), the most probable CRF tag sequence for input x. 3 Models for Multiple Tagging Layers One way to learn a model for the heterogeneous tag-sets setting is to train a base NER (Sec. 2.2) on the concatenation of all training-sets, predicting tags from the union of all training tag-sets. In our experiments, this model under performed, due to the fact that it treats each training example as fully tagged despite being tagged only with the tags belonging to the training-set from which the example is taken (see Sec. 6). We next present two models that instead learn to tag each training tag-set separately. In the first model the outputs from independent base models, each trained on a different tag-set, are merged. The second model utilizes the the multitasking approach to train separate tagging layers that share a single text representation layer. 3.1 Combining independent models In this model, we train a separate NER model for each training set, resulting in K models {Mk}K k=1. At test time, each model predicts a sequence yk = Mk(x) over the corresponding tag-set T r k . The sequences {yk}K k=1 are consolidated into a single sequence ys over the test tag-set T s. We perform this consolidation in a postprocessing step. First, each predicted tag yk,i is mapped to the test tag-set as ys k,i. We employ the provided tag hierarchy for this mapping by traversing it starting from yk,i until a test tag is reached. Then, for every token xi, we consider the test tags predicted at position i by the different models M(xi) = {ys k,i|ys k,i ̸= ‘Other’}. Cases where M(xi) contains more than one tag are called collisions. Models must consolidate collisions, selecting a single predicted tag for xi. We introduce three different consolidation methods. The first is to randomly select a tag from M(xi). The second chooses the tag that originates from the tag sequence yk with the highest CRF probability score. The third computes the marginal CRF tag probability for each tag and selects the one with the highest probability. 
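Before describing the multitasking variant, the post-processing consolidation of Sec. 3.1 can be sketched as follows, using the random-selection variant (the one later reported as representative of all consolidation methods in Sec. 6). This is our illustration, not the authors' implementation.

```python
# Sketch of consolidating per-model predictions into one test-tag sequence.
import random

def consolidate(mapped_tags_per_model, rng=None):
    """mapped_tags_per_model: one tag sequence per independently trained
    model, already mapped onto the test tag-set (e.g. with a hierarchy walk
    such as map_to_tagset above)."""
    rng = rng or random.Random(0)
    n_tokens = len(mapped_tags_per_model[0])
    output = []
    for i in range(n_tokens):
        # Candidate test tags predicted for token i; 'Other' is ignored.
        candidates = [seq[i] for seq in mapped_tags_per_model
                      if seq[i] != "Other"]
        # More than one distinct candidate constitutes a collision; here it
        # is resolved by random choice among the candidates.
        output.append(rng.choice(candidates) if candidates else "Other")
    return output
```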
3.2 Multitasking for heterogeneous tag-sets Lately, several works explored using multitasking (MTL) for inductive transfer learning within a neural architecture (Collobert and Weston, 2008; Chen et al., 2016; Peng and Dredze, 2017). Such algorithms jointly train a single model to solve different NLP tasks, such as NER, sentiment analysis and text classification. The various tasks share the same text representation layer in the model but maintain a separate tagging layer per task. We adapt multitasking to heterogeneous tagsets by considering each training dataset, which has a different tag-set T r k , as a separate NER task. Thus, a single model is trained, in which the latent text representation {ri}n 1 (see Sec. 2.2) is shared between NER tasks. As mentioned above, the tagging layers (projection and CRF) are kept separate for each tag-set. Fig. 3 illustrates this architecture. We emphasize that the output of the MTL model still consists of {yk}K k=1 different tag sequence predictions. They are consolidated into a final single sequence ys using the same post-processing step described in Sec. 3.1. 4 Tag Hierarchy Model The models introduced in Sec. 3.1 and 3.2 learn to predict a tag sequence for each training tagset separately and they do not share parameters between tagging layers. In addition, they require 143 Tag-set 1 (T1): Name, Street, City, Hospital, Age>90, T1-Other Tag-set 2 (T2): First Name, Last Name, Address, Age, T2-Other Tag-set 3 (T3): Name, Location, Date, T3-Other First Name Last Name Street City Hospital Age>90 AgeOther LocationOther Date Address Location Name Age T1-Other T2-Other T3-Other FGOther AddressOther NameOther Figure 4: The tag hierarchy in Fig. 1 for three tag-sets after closure extension. Green nodes and edges were automatically added in this process. Fine-grained tags are surrounded by a dotted box. a post-processing step, outside of the model, for merging the tag sequences inferred for the different tag-sets. A simple concatenation of all training data is also not enough to accommodate the differences between the tag-sets within the model (see Sec. 3). Moreover, none of these models utilizes the relations between tags, which are provided as input in the form of a tag hierarchy. In this section, we propose a model that addresses these limitations. This model utilizes the given tag hierarchy at training time to learn a single, shared tagging layer that predicts only finegrained tags. The hierarchy is then used during inference to map fine-grained tags onto a target tag-set. Consequently, all tagging decisions are made in the model, without the need for a postprocessing step. 4.1 Notations In the input hierarchy DAG, each node represents some semantic role of words in sentences, (e.g. ‘Name’). A directed edge c →d implies that c is a hyponym of d, meaning c captures a subset of the semantics of d. Examples include ‘LastName’ →‘Name’, and ‘Street’ → ‘Location’ in Fig. 1. We denote the set of all tags that capture some subset of semantics of d by Sem(d) = {d} ∪{c|c R−→d}, where R−→indicates that there is a directed path from c to d in the graph. For example, Sem(Name) = {Name, LastName, FirstName}. If a node d has no hyponyms (Sem(d) = {d}), it represents some fine-grained tag semantics. We denote the set of all fine-grained tags by T FG. We also denote all fine-grained tags that are hyponyms of d by Fine(d) = T FG ∩Sem(d), e.g. Fine(Name) = {LastName, FirstName}. 
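The Sem(d) and Fine(d) notation can be computed directly from the hypernym graph. The sketch below assumes the hierarchy is given as a mapping from each tag to its hypernyms and is only meant to illustrate the definitions above; it is not part of the described model code.

```python
# Sketch: compute Sem(d) (d plus all of its hyponyms) and Fine(d)
# (the fine-grained hyponyms of d) from a hypernym DAG.
from collections import defaultdict

def build_sem(edges):
    """edges: dict mapping each tag to an iterable of its hypernym tags."""
    children = defaultdict(set)
    tags = set(edges)
    for c, parents in edges.items():
        tags.update(parents)
        for p in parents:
            children[p].add(c)

    def sem(d):
        out = {d}
        for c in children[d]:
            out |= sem(c)
        return out

    sems = {d: sem(d) for d in tags}
    fine_grained = {d for d in tags if not children[d]}   # T^FG: no hyponyms
    fines = {d: sems[d] & fine_grained for d in tags}     # Fine(d)
    return sems, fines
```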
As mentioned above, our hierarchical model predicts tag sequences only from T FG and then maps them onto a target tag-set. 4.2 Hierarchy extension with ‘Other’ tags For each tag d we would like the semantics captured by the union of semantics of all tags in Fine(d) to be exactly the semantics of d, making sure we will not miss any aspect of d when predicting only over T FG. Yet, this semantics-equality property does not hold in general. One such example in Fig. 4 is ‘Age>90’→‘Age’, because there may be age mentions below 90 annotated in T2’s dataset. To fix the semantics-equality above, we use the notion of the ‘Other’ tag in NER, which has the semantics of “all the rest”. Specifically, for every d /∈T FG, a fine-grained tag ‘d-Other’ ∈T FG and an edge ‘d-Other’→‘d’ are automatically added to the graph, hence ‘d-Other’∈Fine(d). For instance, ‘Age-Other’→‘Age’. These new tags represent the aspects of d not captured by the other tags in Fine(d). Next a tag ‘Ti-Other’ is automatically added to each tag-set Ti, explicitly representing the “all the rest” semantics of Ti. The labels for ‘Ti-Other’ are induced automatically from unlabeled tokens in the original DSr i dataset. To make sure that the semantics-equality property above also holds for ‘Ti-Other’, a fine-grained tag ‘FG-Other’ is also added, which captures the “all the rest” semantics at the fine-grained level. Then, each ‘Ti-Other’ is connected to all fine-grained tags that do not capture some semantics of the tags in Ti, defining: Fine(Ti-Other) = T FG \ [ d∈Ti∖{Ti-Other} Sem(d) This mapping is important at training time, where ‘Ti-Other’ labels are used as distant supervision over their related fine-grained tags (Sec. 4.3). Fig. 4 depicts our hierarchy example after this step. We emphasize that all extensions in this step are done automatically as part of the model’s algorithm. 4.3 NER model with tag hierarchy One outcome of the extension step is that the set of fine-grained tags T FG covers all distinct finegrained semantics across all tag-sets. In the following, we train a single NER model (Sec. 2.2) that predicts sequences of tags from the T FG tagset. As there is only one tagging layer, model parameters are shared across all training examples. At inference time, this model predicts the most likely fine-grained tag sequence yfg for the input 144 x. As the model outputs only a single sequence, post-processing consolidation is not needed. The tag hierarchy is used to map each predicated finegrained tag yfg i to a tag in a test tag-set T s by traversing the out-edges of yfg i until a tag in T s is reached. This procedure is also used in the baseline models (see Sec. 3.1) for mapping their predictions onto the test tag-set. However, unlike the baselines, which end with multiple candidate predictions in the test tag-set and need to consolidate between them, here, only a single fine-grained tag sequence is mapped, so no further consolidation is needed. At training time, each example x that belongs to some training dataset DSr i is labeled with a gold-standard tag sequence y where the tags are taken only from the corresponding tag-set T r i . This means that tags {yi} are not necessarily finegrained tags, so there is no direct supervision for predicting fine-grained tag sequences. However, each gold label yi provides distant supervision over its related fine-grained tags, Fine(yi). 
It indicates that one of them is the correct fine-grained label without explicitly stating which one, so we consider all possibilities in a probabilistic manner. Henceforth, we say that a fine-grained tag sequence y^fg agrees with y if ∀i: y^fg_i ∈ Fine(y_i), i.e. y^fg is a plausible interpretation of y at the fine-grained tag level. For example, following Fig. 4, the sequences ['Hospital', 'City'] and ['Street', 'City'] agree with ['Location', 'Location'], unlike ['City', 'Last Name']. We denote all fine-grained tag sequences that agree with y by AgreeWith(y). Using this definition, the tag-hierarchy model is trained with the loss function:

loss(y) = -\log\left(\frac{Z_y}{Z}\right)    (1)

Z_y = \sum_{y^{fg} \in AgreeWith(y)} \phi(y^{fg})    (2)

Z = \sum_{y^{fg}} \phi(y^{fg})    (3)

where \phi(y) stands for the model's score for sequence y, viewed as an unnormalized probability. Z is the standard CRF partition function over all possible fine-grained tag sequences. Z_y, on the other hand, accumulates scores only of fine-grained tag sequences that agree with y. Thus, this loss function aims at increasing the summed probability of all fine-grained sequences agreeing with y. Both Z_y and Z can be computed efficiently using the Forward-Backward algorithm (Lafferty et al., 2001).

We note that we also considered finding the most likely tag sequence over a test tag-set at inference time by summing the probabilities of all fine-grained tag sequences that agree with each candidate sequence y: \max_y \sum_{y^{fg} \in AgreeWith(y)} \phi(y^{fg}). However, this problem is NP-hard (Lyngsø and Pedersen, 2002). We plan to explore other alternatives in future work.

5 Experimental Settings

To test the tag-hierarchy model under heterogeneous tag-set scenarios, we conducted experiments using datasets from two domains. We next describe these datasets as well as implementation details for the tested models. Sec. 6 then details the experiments and their results.

5.1 Datasets

Five datasets from two domains, medical and news, were used in our experiments. Table 1 summarizes their main statistics.

Table 1: Dataset statistics. Tagged tokens refers to the percentage of tokens tagged not as 'Other'.
Dataset          Tag-set Size   # Tokens    Tagged Tokens (%)
I2B2'06 (train)  7              387,126     4.6
I2B2'06 (test)                  163,488     4.2
I2B2'14 (train)  17             336,422     4.4
I2B2'14 (dev)                   152,895     5.0
I2B2'14 (test)                  316,212     4.6
Physio (test)    6              335,383     0.7
Conll (train)    4              203,621     16.7
Conll (dev)                     51,362      16.7
Conll (test)                    46,435      18.1
Onto (train)     18             1,304,491   13.1
Onto (test)                     162,971     14.2

For the medical domain we used the datasets I2B2-2006 (denoted I2B2'06) (Uzuner et al., 2007), I2B2-2014 (denoted I2B2'14) (Stubbs and Uzuner, 2015) and the PhysioNet golden set (denoted Physio) (Goldberger et al., 2000). These datasets are all annotated for the NER task of de-identification (a.k.a. text anonymization) (Dernoncourt et al., 2017). Still, as seen in Table 1, each dataset is annotated with a different tag-set. Both I2B2'06 and I2B2'14 include train and test sets, while Physio contains only a test set. For the news domain we used the English part of CONLL-2003 (denoted Conll) (Tjong Kim Sang and De Meulder, 2003) and OntoNotes-v5 (denoted Onto) (Weischedel et al., 2013), both with train and test sets. We note that I2B2'14, Conll and Onto also contain a dev-set, which is used for hyper-param tuning (see below). In all experiments, each example is a full document. Each document is split into tokens on whitespaces and punctuation. A tag-hierarchy covering the 57 tags from all five datasets was given as input to all models in all experiments.
We constructed this hierarchy manually. The only non-trivial tag was ‘Location’, which in I2B2’14 is split into finer tags (‘City’, ‘Street’ etc.) and includes also hospital mentions in Conll and Onto. We resolved these relations similarly to the graph in Figure 1. 5.2 Compared Models Four models were compared in our experiments: MConcat A single NER model on the concatenation of datasets and tag-sets (Sec. 3). MIndep Combining predictions of independent NER models, one per tag-set (Sec. 3.1). MMTL Multitasking over training tag-sets (Sec. 3.2). MHier A tag hierarchy employed within a single base model (Sec. 4). All models are based on the neural network described in Sec. 2.2. We tuned the hyper-params in the base model to achieve state-of-the-art results for a single NER model on Conll and I2B2’14 when trained and tested on the same dataset (Strubell et al., 2017; Dernoncourt et al., 2017) (see Table 2). This is done to maintain a constant baseline, and is also due to the fact that I2B2’06 does not have a standard dev-set. We tuned hyper-params over the dev-sets of Conll and I2B2’14. For character-based embedding we used a single bidirectional LSTM (Hochreiter and Schmidhuber, 1997) with hidden state size of 25. For word embeddings we used pre-trained GloVe embeddings1 (Pennington et al., 2014), without further training. For token recoding we used a two-level stacked bidirectional LSTM (Graves et al., 2013) with both output and hidden state of size 100. Once these hyper-params were set, no further tuning was made in our experiments, which means all models for heterogeneous tag-sets were tested under the above fixed hyper-param set. In each experiment, each model was trained until convergence on the respective training set. 1nlp.stanford.edu/data/glove.6B.zip I2B2’06 I2B2’14 Conll Onto Micro avg. F1 0.894 0.960 0.926 0.896 Table 2: F1 for training and testing a single base NER model on the same dataset. Tag Frequency in training / test (%) I2B2’06 I2B2’14 Conll Onto Name 1.4 / 1.3 1.0 / 1.0 4.3 / 4.9 3.1 / 2.9 Date 1.7 / 1.5 2.4 / 2.5 0 / 0 2.7 / 3.1 Location 0.1 / 0.1 0.2 / 0.3 3.2 / 3.4 2.7 / 3.2 Hospital 0.6 / 0.7 0.3 / 0.3 0 / 0 0 / 0 Table 3: Occurrence statistics for tags used in the tagset extension experiment, reported as % out of all tokens in the training and test sets of each dataset. 6 Experiments and Results We performed two experiments. The first refers to selective annotation, in which an existing tag-set is extended with a new tag by annotating a new dataset only with the new tag. The second experiment tests the ability of each model to integrate two full tag-sets. In all experiments we assess model performance via micro-averaged tag F1, in accordance with CoNLL evaluation (Tjong Kim Sang and De Meulder, 2003). Statistical significance was computed using the Wilcoxon two-sided signed ranks test at p = 0.01 (Wilcoxon, 1945). We next detail each experiment and its results. In all our experiments, we found the performance of the different consolidation methods (Sec. 3.1) to be on par. One reason that using model scores does not beat random selection may be due to the overconfidence of the tagging models – their prediction probabilities are close to 0 or 1. We report figures for random selection as representative of all consolidation methods. 6.1 Tag-set extension experiment In this experiment, we considered the 4 most frequent tags that occur in at least two of our datasets: ‘Name’, ‘Date’, ‘Location’ and ‘Hospital’ (Table 3 summarizes their statistics). 
For each frequent tag t and an ordered pair of datasets in which t occurs, we constructed new training sets by removing t from the first training set (termed base dataset) and remove all tags but t from the second training set (termed extending dataset). For example, for the triplet of { ‘Name’, I2B2’14, I2B2’06}, we constructed a version of I2B2’14 without ‘Name’ annotations and a version of I2B2’06 containing only annotations for ‘Name’. This process yielded 32 such triplets. 146 F1 AVERAGE Model Extending Tag Base Dataset Hier Indep MTL Date I2B2’14 0.806 0.795 0.787 I2B2’06 0.756 0.761 0.787 Onto 0.835 0.828 0.819 Date Total 0.799 0.795 0.798 Hospital I2B2’14 0.931 0.941 0.918 I2B2’06 0.867 0.866 0.853 Hospital Total 0.899 0.904 0.885 Location Conll 0.801 0.784 0.793 I2B2’14 0.953 0.913 0.905 I2B2’06 0.877 0.848 0.820 Onto 0.785 0.694 0.692 Location Total 0.854 0.810 0.802 Name Conll 0.847 0.759 0.729 I2B2’14 0.918 0.880 0.902 I2B2’06 0.740 0.743 0.729 Onto 0.878 0.862 0.862 Name Total 0.846 0.811 0.806 Grand Total 0.854 0.823 0.816 Table 4: F1 in the tag-set extension experiment, averaged over extending datasets for every base dataset. For every triplet, we train all tested models on the two modified training sets and test them on the test-set of the base dataset (I2B2’14 in the example above). Each test-set was not altered and contains all tags of the base tag-set, including t. MConcat performed poorly in this experiment. For example, on the dataset extending I2B2’14 with ‘Name’ from I2B2’06, MConcat tagged only one ‘Name’ out of over 4000 ‘Name’ mentions in the test set. Given this, we do not provide further details of the results of MConcat in this experiment. For the three models tested, this experiment yields 96 results. The main results2 of this experiment are shown in Table 4. Surprisingly, in more tests MIndep outperformed MMTL than vice versa, adding to prior observations that multitasking can hurt performance instead of improving it (Bingel and Søgaard, 2017; Alonso and Plank, 2017; Bjerva, 2017). But, applying a shared tagging layer on top of a shared text representation boosts the model’s capability and stability. Indeed, overall, MHier outperforms the other models in most tests, and in the rest it is similar to the best performing model. Analyzing the results, we noticed that the gap between model performance increases when more collisions are encountered for MMTL and MIndep at post-processing time (see Sec. 3.1). The amount of collisions may be viewed as a predictor for the baselines’ difficulty to handle a specific heterogeneous tag-sets setting. Table 5 presents the tests in which more than 100 collisions were detected for either MIndep or MMTL, constituting 66% of all 2Detailed results for all 96 tests are given in the Appendix. F1 Model Tag Base Extending Hier Indep MTL Date I2B2’14 I2B2’06 0.899 *0.903 Onto *0.713 0.686 0.671 I2B2’06 Onto 0.641 *0.681 Onto I2B2’06 *0.834 0.807 Location Conll I2B2’14 *0.818 0.783 I2B2’06 *0.748 0.730 Onto *0.836 0.830 I2B2’14 Conll *0.954 0.899 0.887 Onto *0.951 0.921 0.907 I2B2’06 Conll 0.876 0.816 0.760 Onto *0.869 0.847 0.812 Onto Conll *0.747 0.701 0.703 I2B2’14 0.793 0.691 0.707 I2B2’06 *0.814 0.691 Name Conll I2B2’14 *0.855 0.690 I2B2’06 *0.827 0.666 0.631 Onto 0.860 0.841 I2B2’14 Conll *0.900 0.863 I2B2’06 *0.943 0.893 Onto *0.911 0.882 0.891 I2B2’06 Conll *0.662 0.653 Onto Conll *0.895 0.888 I2B2’14 *0.892 0.872 I2B2’06 *0.846 0.827 Table 5: F1 for tag-set extensions with more than 100 collisions. 
Blank entries indicate fewer than 100 collisions. (*) indicates all results that are statistically significantly better than others in that row. F1 Model Tag Base Extending Hier Indep MTL Location I2B2’14 I2B2’06 0.953 0.919 0.919 Onto 0.954 0.899 0.887 Name Conll I2B2’06 0.846 0.827 0.809 Onto 0.895 0.888 0.890 Table 6: Examples for performance differences when base datasets are extended with an in-domain dataset compared to an out-of-domain dataset. test triplets. In these tests, MHier is a clear winner, outperforming the compared models in all but two comparisons, often by a significant margin. Finally, we compared the models trained with selective annotation to an “upper-bound” of training and testing a single NER model on the same dataset with all tags annotated (Table 2). As expected, performance is usually lower with selective annotation. But, the drop intensifies when the base and extending datasets are from different domains – medical and news. In these cases, we observed that MHier is more robust. Its drop compared to combining datasets from the same domain is the least in almost all such combinations. Table 6 provides some illustrative examples. 6.2 Full tag-set integration experiment A scenario distinct from selective annotation is the integration of full tag-sets. On one hand, more training data is available for similar tags. On the other hand, more tags need to be consolidated among the tag-sets. 147 F1 Test Set Model I2B2’06 I2B2’14 Physio I2B2’06 *0.894 0.730 0.637 I2B2’14 0.714 *0.960 0.712 MConcat 0.827 0.809 0.621 MIndep 0.760 0.861 0.640 MMTL 0.81 0.862 *0.739 MHier *0.900 *0.958 *0.760 Collisions Test Set I2B2’06 I2B2’14 Physio MIndep 224 1272 114 MMTL 158 584 44 Table 7: F1 for combining I2B2’06 and I2B2’14. The top two models were trained only on a single dataset. The lower table part holds the number of collisions at post-processing. (*) indicates results that are statistically significantly better than others in that column. To test this scenario, we trained the tested model types on the training sets of I2B2’06 and I2B2’14, which have different tag-sets. The models were evaluated both on the test sets of these datasets and on Physio, an unseen test-set that requires the combination of the two training tag-sets for full coverage of its tag-set. We also compared the models to single models trained on each of the training sets alone. Table 7 displays the results. As expected, single models do well on the testset companion of their training-set but they underperform on the other test-sets. This is expected because the tag-set on which they were trained does not cover well the tag-sets in the other test-sets. When compared with the best-performing single model, using MConcat shows reduced results on all 3 test sets. This can be attributed to reduced performance for types that are semantically different between datasets (e.g. ‘Date’), while performance on similar tags (e.g. ‘Name’) does not drop. Combining the two training sets using either MIndep or MMTL leads to substantial performance drop in 5 out of 6 test-sets compared to the bestperforming single model. This is strongly correlated with the number of collisions encountered (see Table 7). Indeed, the only competitive result, MMTL tested on Physio, had less than 100 collisions. This demonstrates the non triviality in realworld tag-set integration, and the difficulty of resolving tagging decisions across tag-sets. 
By contrast, MHier has no performance drop compared to the single models trained and tested on the same dataset. Moreover, it is the best performing model on the unseen Physio test-set, with 6% relative improvement in F1 over the best single model. This experiment points up the robustness of the tag hierarchy approach when applied to this heterogeneous tag-set scenario. 7 Related Work Collobert et al. (2011) introduced the first competitive NN-based NER that required little or no feature engineering. Huang et al. (2015) combined LSTM with CRF, showing performance similar to non-NN models. Lample et al. (2016) extended this model with character-based embeddings in addition to word embedding, achieving state-of-theart results. Similar architectures, such as combinations of convolutional networks as replacements of RNNs were shown to out-perform previous NER models (Ma and Hovy, 2016; Chiu and Nichols, 2016; Strubell et al., 2017). Dernoncourt et al. (2017) and Liu et al. (2017) showed that the LSTM-CRF model achieves stateof-the-art results also for de-identification in the medical domain. Lee et al. (2018) demonstrated how performance drops significantly when the LSTM-CRF model is tested under transfer learning within the same domain in this task. Collobert and Weston (2008) introduced MTL for NN, and other works followed, showing it helps in various NLP tasks (Chen et al., 2016; Peng and Dredze, 2017). Søgaard and Goldberg (2016) and Hashimoto et al. (2017) argue that cascading architectures can improve MTL performance. Several works have explored conditions for successful application of MTL (Bingel and Søgaard, 2017; Bjerva, 2017; Alonso and Plank, 2017). Few works attempt to share information across datasets at the tagging level. Greenberg et al. (2018) proposed a single CRF model for tagging with heterogeneous tag-sets but without a hierarchy. They show the utility of this method for indomain datasets with a balanced tag distribution. Our model can be viewed as an extension of theirs for tag hierarchies. Augenstein et al. (2018) use tag embeddings in MTL to further propagate information between tasks. Li et al. (2017) propose to use a tag-set made of cross-product of two different POS tag-sets and train a model for it. Given the explosion in tag-set size, they introduce automatic pruning of cross-product tags. Kim et al. (2015) and Qu et al. (2016) automatically learn correlations between tag-sets, given training data for both tag-sets. They rely on similar contexts for related source and target tags, such as ‘professor’ and ‘student’. 148 Our tag-hierarchy model was inspired by recent work on hierarchical multi-label classification (Silla and Freitas, 2011; Zhang and Zhou, 2014), and can be viewed as an extension of this direction onto sequences tagging. 8 Conclusions We proposed a tag-hierarchy model for the heterogeneous tag-sets NER setting, which does not require a consolidation post-processing stage. In the conducted experiments, the proposed model consistently outperformed the baselines in difficult tagging cases and showed robustness when applying a single trained model to varied test sets. In the case of integrating datasets from the news and medical domains we found the blending task to be difficult. In future work, we’d like to improve this integration in order to gain from training on examples from different domains for tags like ‘Name’ and ‘Location’. 
Acknowledgments The authors would like to thank Yossi Matias, Katherine Chou, Greg Corrado, Avinatan Hassidim, Rony Amira, Itay Laish and Amit Markel for their help in creating this work. References Hector Martinez Alonso and Barbara Plank. 2017. When is multitask learning effective? semantic sequence prediction under varying data conditions. In EACL 2017-15th Conference of the European Chapter of the Association for Computational Linguistics, pages 1–10. Isabelle Augenstein, Sebastian Ruder, and Anders Søgaard. 2018. Multi-task learning of pairwise sequence classification tasks over disparate label spaces. arXiv:1802.09913v2. Joachim Bingel and Anders Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. In ACL. Johannes Bjerva. 2017. Will my auxiliary tagging task help? estimating auxiliary tasks effectivity in multi-task learning. In Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017, Gothenburg, Sweden, 131, pages 216–220. Link¨oping University Electronic Press. Hongshen Chen, Yue Zhang, and Qun Liu. 2016. Neural network for heterogeneous annotations. In EMNLP. Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. TACL, 4(1):357–370. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR, 12(Aug):2493–2537. Franck Dernoncourt, Ji Young Lee, Ozlem Uzuner, and Peter Szolovits. 2017. De-identification of patient notes with recurrent neural networks. J. Am Med Inform Assoc, 24(3):596–606. Chuanhai Dong, Huijia Wu, Jiajun Zhang, and Chengqing Zong. 2017. Multichannel lstm-crf for named entity recognition in chinese social media. In CCL/NLP-NABD. Springer. Ary L Goldberger, Luis AN Amaral, Leon Glass, Jeffrey M Hausdorff, Plamen Ch Ivanov, Roger G Mark, Joseph E Mietus, George B Moody, ChungKang Peng, and H Eugene Stanley. 2000. Physiobank, physiotoolkit, and physionet. Circulation, 101(23):215–220. Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In ICASSP. Nathan Greenberg, Trapit Bansal, Patrick Verga, and Andrew McCallum. 2018. Marginal likelihood training of bilstm-crf for biomedical named entity recognition from disjoint label sets. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2824–2829. Kazuma Hashimoto, Yoshimasa Tsuruoka, Richard Socher, et al. 2017. A joint many-task model: Growing a neural network for multiple nlp tasks. In EMNLP. Hangfeng He and Xu Sun. 2017. A unified model for cross-domain and semi-supervised named entity recognition in chinese social media. In AAAI. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv:1508.01991. Young-Bum Kim, Karl Stratos, Ruhi Sarikaya, and Minwoo Jeong. 2015. New transfer learning techniques for disparate label sets. In ACL. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML. 149 Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. 
Ji Young Lee, Franck Dernoncourt, and Peter Szolovits. 2018. Transfer learning for named-entity recognition with neural networks.
Zhenghua Li, Jiayuan Chao, Min Zhang, Wenliang Chen, Meishan Zhang, and Guohong Fu. 2017. Coupled POS tagging on heterogeneous annotations. TASLP, 25(3):557-571.
Wang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramon Fernandez, Silvio Amir, Luis Marujo, and Tiago Luis. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In EMNLP.
Zengjian Liu, Buzhou Tang, Xiaolong Wang, and Qingcai Chen. 2017. De-identification of clinical notes via recurrent neural network and conditional random field. J. Biomed. Inf., 75:34-42.
Rune B Lyngsø and Christian NS Pedersen. 2002. The consensus string problem and the complexity of comparing hidden Markov models. Journal of Computer and System Sciences, 65(3):545-569.
Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In ACL.
Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359.
Nanyun Peng and Mark Dredze. 2017. Multi-task domain adaptation for sequence tagging. In Proceedings of the 2nd Workshop on Representation Learning for NLP.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.
Lizhen Qu, Gabriela Ferraro, Liyuan Zhou, Weiwei Hou, and Timothy Baldwin. 2016. Named entity recognition for novel types by transfer learning. In EMNLP.
Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv:1706.05098.
Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.
Benjamin Shickel, Patrick James Tighe, Azra Bihorac, and Parisa Rashidi. 2017. Deep EHR: A survey of recent advances in deep learning techniques for electronic health record (EHR) analysis. IEEE Journal of Biomedical and Health Informatics.
Carlos N Silla and Alex A Freitas. 2011. A survey of hierarchical classification across different application domains. Data Mining and Knowledge Discovery, 22(1-2):31-72.
Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In ACL.
Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. In EMNLP.
Amber Stubbs and Özlem Uzuner. 2015. Annotating longitudinal clinical narratives for de-identification: The 2014 i2b2/UTHealth corpus. J. Biomed. Inf., 58:20-29.
Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In NAACL.
Özlem Uzuner, Yuan Luo, and Peter Szolovits. 2007. Evaluating the state-of-the-art in automatic de-identification. J. Am Med Inform Assoc, 14(5):550-563.
Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, and Ann Houston. 2013. OntoNotes release 5.0 LDC2013T19. LDC.
Frank Wilcoxon. 1945. Individual comparisons by ranking methods. Biometrics Bulletin, 1(6):80-83.
Min-Ling Zhang and Zhi-Hua Zhou. 2014. A review on multi-label learning algorithms. IEEE Transactions on Knowledge and Data Engineering, 26(8):1819-1837.
A Experiment Results

Full experiment results for Section 6.1. F1 scores of the Hier, Indep and MTL models for each tag, base dataset and extending dataset:

Tag       Base     Extending  Hier   Indep  MTL
Date      I2B2'14  I2B2'06    0.899  0.904  0.903
Date      I2B2'14  Onto       0.713  0.686  0.671
Date      I2B2'06  I2B2'14    0.871  0.840  0.875
Date      I2B2'06  Onto       0.641  0.681  0.698
Date      Onto     I2B2'14    0.837  0.830  0.831
Date      Onto     I2B2'06    0.834  0.826  0.807
Hospital  I2B2'14  I2B2'06    0.931  0.941  0.918
Hospital  I2B2'06  I2B2'14    0.867  0.866  0.853
Location  Conll    I2B2'14    0.818  0.783  0.812
Location  Conll    I2B2'06    0.748  0.739  0.730
Location  Conll    Onto       0.836  0.830  0.836
Location  I2B2'14  Conll      0.954  0.899  0.887
Location  I2B2'14  I2B2'06    0.953  0.919  0.919
Location  I2B2'14  Onto       0.951  0.921  0.907
Location  I2B2'06  Conll      0.876  0.816  0.760
Location  I2B2'06  I2B2'14    0.886  0.883  0.888
Location  I2B2'06  Onto       0.869  0.847  0.812
Location  Onto     Conll      0.747  0.701  0.703
Location  Onto     I2B2'14    0.793  0.691  0.707
Location  Onto     I2B2'06    0.814  0.691  0.666
Name      Conll    I2B2'14    0.855  0.771  0.690
Name      Conll    I2B2'06    0.827  0.666  0.631
Name      Conll    Onto       0.860  0.841  0.867
Name      I2B2'14  Conll      0.900  0.863  0.890
Name      I2B2'14  I2B2'06    0.943  0.893  0.927
Name      I2B2'14  Onto       0.911  0.882  0.891
Name      I2B2'06  Conll      0.662  0.679  0.653
Name      I2B2'06  I2B2'14    0.834  0.824  0.808
Name      I2B2'06  Onto       0.726  0.726  0.727
Name      Onto     Conll      0.895  0.888  0.890
Name      Onto     I2B2'14    0.892  0.872  0.886
Name      Onto     I2B2'06    0.846  0.827  0.809
Multi-Channel Graph Neural Network for Entity Alignment

Yixin Cao1 Zhiyuan Liu2 Chengjiang Li3 Zhiyuan Liu3 Juanzi Li3 Tat-Seng Chua1
1School of Computing, National University of Singapore, Singapore
2School of Science, Xi'an Jiaotong University, Xi'an, China
3Department of CST, Tsinghua University, Beijing, China
{caoyixin2011,acharkq,iamlockelightning}@gmail.com
{liuzy,lijuanzi}@tsinghua.edu.cn, [email protected]

Abstract

Entity alignment typically suffers from the issues of structural heterogeneity and limited seed alignments. In this paper, we propose a novel Multi-channel Graph Neural Network model (MuGNN) to learn alignment-oriented knowledge graph (KG) embeddings by robustly encoding two KGs via multiple channels. Each channel encodes a KG via a different relation weighting scheme, with KG self-attention towards KG completion and cross-KG attention for pruning exclusive entities, respectively; the channels are further combined via pooling techniques. Moreover, we also infer and transfer rule knowledge for completing the two KGs consistently. MuGNN is expected to reconcile the structural differences of the two KGs, and thus make better use of seed alignments. Extensive experiments on five publicly available datasets demonstrate our superior performance (on average, a 5% improvement in Hits@1). Source code and data used in the experiments can be accessed at https://github.com/thunlp/MuGNN.

1 Introduction

Knowledge Graphs (KGs) store world knowledge in the form of directed graphs, where nodes denote entities and edges are their relations. Since the concept was proposed, many KGs have been constructed (e.g., YAGO (Rebele et al., 2016)) to provide structural knowledge for different applications and languages. These KGs usually contain complementary contents, attracting researchers to integrate them into a unified KG, which shall benefit many knowledge-driven tasks, such as information extraction (Cao et al., 2018a) and recommendation (Wang et al., 2018a).

It is non-trivial to align different KGs due to their distinct surface forms, which makes symbolic-based methods (Suchanek et al., 2011) not always effective. Instead, recent work utilizes general KG embedding methods (e.g., TransE (Bordes et al., 2013)) and aligns equivalent entities into a unified vector space based on a few seed alignments (Chen et al., 2017; Sun et al., 2017; Zhu et al., 2017; Chen et al., 2018; Sun et al., 2018; Wang et al., 2018b). The assumption is that entities and their counterparts in different KGs should have similar structures and thus similar embeddings. However, alignment performance is unsatisfactory mainly due to the following challenges:

Heterogeneity of Structures Different KGs usually differ a lot, and may mislead the representation learning and the alignment information from seeds. Take the entity Jilin City as an example (Figure 1): KG1 and KG2 present its subgraphs derived from English and Chinese Wikipedia, respectively.

Figure 1: Illustration of the structural differences (dashed lines and ellipse) between different KGs.
Since it is a Chinese city, KG2 is more informative than KG1 (denoted by dashed lines and ellipse), such as the relations of Dialect and Nearby, and the entity Liu Fei through the relation Mayor. Clearly, the province Jilin in KG1 and Jilin City in KG2, which are an incorrect alignment, are more likely to be close in the vector space, because they have more similar structures (e.g., Northeastern Mandarin and Changchun). What's worse, this incorrect alignment shall spread further over the graph.

Limited Seed Alignments Recent efforts based on general embedding methods heavily rely on existing alignments as training data, while seed alignments are usually insufficient (Chen et al., 2017) for high-quality entity embeddings. Wang et al. (2018b) introduce Graph Convolutional Networks (GCN) (Kipf and Welling, 2017) to enhance the entity embeddings by modeling structural features, but fail to consider structural heterogeneity.

To address these issues, we propose to perform KG inference and alignment jointly to explicitly reconcile the structural differences between KGs, and to utilize a graph-based model to make better use of seed alignment information. The basic idea of structural reconciliation is to complete missing relations and prune exclusive entities. As shown in Figure 1, to reconcile the differences of Jilin City, it is necessary to complete the missing relations Dialect and Nearby in KG1, and to filter out the entity Liu Fei, which is exclusive to KG2. The asymmetric entities and relations are caused not only by the incomplete nature of KGs, but also by their different construction demands.

In this paper, we propose a novel Multi-channel Graph Neural Network model, MuGNN, which can encode different KGs to learn alignment-oriented embeddings. For each KG, MuGNN utilizes different channels towards KG completion and pruning, so as to reconcile two types of structural differences: missing relations and exclusive entities. Different channels are combined via pooling techniques, thus entity embeddings are enhanced with reconciled structures from different perspectives, making effective and efficient use of seed alignments. Between KGs, each channel transfers structural knowledge via shared parameters. Specifically, for KG completion, we first employ AMIE+ (Galárraga et al., 2015) on each KG to induce rules, then transfer them between KGs towards consistent completion. Following the Graph Attention Network (GAT) (Velickovic et al., 2018), we utilize KG self-attention to weight relations for the GNN channels. For KG pruning, we design cross-KG attention to filter out exclusive entities by assigning low weights to the corresponding relations. We summarize the main contributions as follows:

• We propose a novel Multi-channel GNN model MuGNN that learns alignment-oriented embeddings by encoding graphs from different perspectives: completion and pruning, so as to be robust to structural differences.
• We propose to perform KG inference and alignment jointly, so that the heterogeneity of KGs is explicitly reconciled through completion by rule inference and transfer, and pruning via cross-KG attention.
• We perform extensive experiments on five publicly available datasets for entity alignment tasks, and achieve significant improvements of 5% Hits@1 on average. A further ablation study demonstrates the effectiveness of our key components.

2 Preliminaries and Framework

2.1 Preliminaries

KG is a directed graph G = (E, R, T) involving a set of entities E, relation types R, and triplets T.
Each triplet t = (e_i, r_ij, e_j) ∈ T denotes that head entity e_i is related to tail entity e_j through relation r_ij ∈ R.

Rule knowledge K = {k} can be induced from a KG, e.g., in the form of ∀x, y ∈ E: (x, r_s, y) ⇒ (x, r_c, y), stating that two entities might be related through r_c if they are related through r_s. The left side of the arrow is defined as the premise, and the right side as the conclusion. We denote a rule as k = (r_c | r_s1, ..., r_sp), consisting of one or multiple (|p|) premises and only one conclusion.

Rule Grounding is to find suitable triplets satisfying the premise-conclusion relationship defined by rules. For rule k, we denote one of its grounds as g(k) = (t_c | t_s1, ..., t_sp), including |p| + 1 triplets. The triplets satisfy t_s1 ∧ ... ∧ t_sp ⇒ t_c, where ∧ is the logical conjunction that plays a similar role as 'and'. Other compositions include disjunction ∨ (similar to 'or') and negation ¬ (similar to 'not'). For example, given a rule bornIn(x, y) ∧ cityOf(y, z) ⇒ nationality(x, z), we ground it in a KG and obtain: bornIn(Obama, Hawaii) ∧ cityOf(Hawaii, United States) ⇒ nationality(Obama, United States). We use G(k) = {g(k)} to denote all groundings of rule k.
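To make the notation above concrete, the following is a minimal, self-contained Python sketch (ours, not part of the paper): the Rule container, the toy KG, and the two-hop grounding helper are illustrative assumptions rather than the authors' data structures.

```python
from collections import namedtuple

# A rule k = (r_c | r_s1, ..., r_sp): premise relations plus one conclusion relation.
Rule = namedtuple("Rule", ["premises", "conclusion"])

# A toy KG as a set of (head, relation, tail) triplets.
kg = {
    ("Obama", "bornIn", "Hawaii"),
    ("Hawaii", "cityOf", "United States"),
}

# bornIn(x, y) ∧ cityOf(y, z) ⇒ nationality(x, z)
rule = Rule(premises=("bornIn", "cityOf"), conclusion="nationality")

def ground_rule_2hop(kg, rule):
    """Enumerate groundings g(k) = (t_c | t_s1, t_s2) of a two-premise chain rule."""
    (r1, r2), r_c = rule                     # (premise relations, conclusion relation)
    for (x, ra, y) in kg:
        if ra != r1:
            continue
        for (y2, rb, z) in kg:
            if rb == r2 and y2 == y:
                premises = ((x, r1, y), (y, r2, z))
                conclusion = (x, r_c, z)
                yield premises, conclusion

for premises, conclusion in ground_rule_2hop(kg, rule):
    print(premises, "=>", conclusion)
# (('Obama', 'bornIn', 'Hawaii'), ('Hawaii', 'cityOf', 'United States'))
#   => ('Obama', 'nationality', 'United States')
```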
Figure 2: Framework. Rectangles denote two main steps, and rounded rectangles denote the key components of the corresponding step. After rule inference and transfer, we utilize rules to complete each KG, denoted by dashed lines (r'_3). Through relation weighting, we obtain multiple weighted graphs for different GNN channels, in which relation r_4 is weighted to 0.0, which prunes exclusive entities. These channels are combined as the input of the align model for alignment-oriented KG embeddings.
Entity alignment takes two heterogeneous KGs G and G′ = (E′, R′, T′) as input; the goal is to find as many alignments A_e = {(e, e′) ∈ E × E′ | e ↔ e′} as possible, for which an equivalence relation ↔ holds between e and e′. That is, e and e′ are in different KGs but denote the same thing. As shown in Figure 1, Jilin City in English Wikipedia (i.e., KG1) and in Chinese Wikipedia (i.e., KG2) have different structures, but denote the same Chinese city. Normally, some prior alignments of entities A^s_e and of relations A^s_r = {(r, r′) ∈ R × R′ | r ↔ r′} can be easily obtained manually or by simple lexicon-based methods (e.g., entity title translation), namely seed alignments (seeds for short). We use bold-face letters to denote the vector representations of the corresponding terms throughout the paper.

2.2 Framework

MuGNN aims at learning alignment-oriented KG embeddings for entity alignment. It introduces KG inference and transfer to explicitly complete KGs, and utilizes different relation weighting schemes, KG self-attention and cross-KG attention, to encode KGs robustly. As shown in Figure 2, there are two main steps in our framework:

KG Completion aims at reconciling the structural differences by completing the missing relations. It not only induces rules by using a popular rule mining system, AMIE+ (Galárraga et al., 2015), but also transfers them between the KGs based on seed aligned relations. Rule transfer is based on the assumption that knowledge can be generalized across KGs, no matter in which languages or domains.

Multi-channel Graph Neural Network encodes each KG through different channels. The channels enhance the entity embeddings from different perspectives, towards completion and pruning, so that the entities and their counterparts have similar structures. MuGNN contains three main components: (1) relation weighting, which generates a weight matrix for each KG according to two schemes: KG self-attention and cross-KG attention. Each type of attention refers to a GNN channel that shares parameters between KGs for structural knowledge transfer; (2) a GNN encoder to model the entire graph's features by improving entity embeddings with their neighbors, so that the seed alignment information is propagated over the entire graph. We combine the outputs of the GNN encoders in different channels via pooling techniques as the input of (3) the align model, which embeds two KGs into a unified vector space by pushing the aligned entities (and relations) of the seeds together.

3 KG Completion

In this section, we introduce how to utilize rule knowledge to explicitly complete a KG, which first infers rules from each KG, then transfers these rules between KGs based on the knowledge invariant assumption, and finally grounds rules in each KG for consistent completion.

3.1 Rule Inference and Transfer

Since the acquisition of rule knowledge is not our focus in this paper, we utilize AMIE+ (Galárraga et al., 2015), a modern rule mining system, to efficiently find Horn rules from a large-scale KG, such as marriedTo(x, y) ∧ liveIn(x, z) ⇒ liveIn(y, z). Its source code is available online at https://www.mpi-inf.mpg.de/. Formally, given two KGs G and G′, we first mine rules separately and obtain two sets of rule knowledge K and K′. These rule sets are quite different since KGs are constructed to meet different demands of applications or languages.
Although they can be used to complete their own KGs separately, we further transfer the two sets of rules into each other through the Knowledge Invariant Assumption:

Knowledge has universality no matter in which languages or domains.

Given aligned relations A^s_r and a rule k ∈ K, we replace all relations involved in the rule k = (r_c | r_s1, ..., r_sp) with their counterparts if there are (r_c, r′_c), (r_si, r′_si) ∈ A^s_r, i = 1, ..., p. Thus, we obtain a rule k′ = (r′_c | r′_s1, ..., r′_sp) and add it to K̃′ = K′ ∪ k′ if k′ ∉ K′. Real examples of transferred rules can be found in the experiments. Note that there may be no transferred rules if no aligned relations can be found, i.e., A^s_r = ∅.

3.2 Rule Grounding

We now ground each rule set on the corresponding KG for completion, which not only improves the efficiency of the align model through a denser KG for propagation, but also adds extra constraints that are helpful for high-quality entity embedding learning. Take KG G as an example: given a rule k ∈ K, we collect its grounds whose premise triplets can be found in the KG but whose conclusion triplet cannot: G(k) = {g(k) | t_s1, ..., t_sp ∈ T, t_c ∉ T}. Thus, we add all conclusion triplets into the KG: G̃ = G ∪ t_c, t_c ∈ G(k). Similarly, we can complete KG G′ to G̃′.

As shown in Figure 1, we obtain the rule province(x, y) ∧ dialect(y, z) ⇒ dialect(x, z) from the informative KG2, then transfer it to KG1 based on the aligned relations province and dialect. Thus, in KG1, we find the suitable triplets province(Jilin City, Jilin) ∧ dialect(Jilin, Northeastern Mandarin), and thus obtain a new triplet dialect(Jilin City, Northeastern Mandarin). It is worth noting that the inferred rules do not hold in all cases; one possible remedy is to consider a confidence value for each grounding. We leave this to future work.
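As a rough illustration of Sections 3.1-3.2 (our own sketch under simplifying assumptions, not the released MuGNN code), a rule can be transferred by replacing each of its relations with its aligned counterpart, and a KG can be completed by adding every conclusion triplet whose premises already hold. Here a rule is a (premise_relations, conclusion_relation) pair, and ground_fn is a grounding enumerator such as the one sketched earlier.

```python
def transfer_rules(rules, aligned_relations):
    """Map rules of KG G onto KG G' using a dict of aligned relations r -> r'.
    Rules whose relations have no counterpart in A^s_r are simply skipped."""
    transferred = []
    for premises, conclusion in rules:  # rule k = (r_c | r_s1, ..., r_sp)
        if conclusion in aligned_relations and all(r in aligned_relations for r in premises):
            transferred.append(
                (tuple(aligned_relations[r] for r in premises), aligned_relations[conclusion]))
    return transferred


def complete_kg(kg, rules, ground_fn):
    """Return the completed KG: add each conclusion triplet t_c whose premise
    triplets are found in the KG. ground_fn(kg, rule) yields
    (premise_triplets, conclusion_triplet) pairs."""
    completed = set(kg)
    for rule in rules:
        for _premises, conclusion in ground_fn(kg, rule):
            completed.add(conclusion)  # G~ = G ∪ {t_c}
    return completed
```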
4 Multi-Channel Graph Neural Network

In this section, we describe the three main components involved in MuGNN to encode different graphs towards alignment-oriented embedding learning: relation weighting, the multi-channel GNN encoder, and the align model.

4.1 Relation Weighting

Relation weighting generates a weighted connectivity matrix A based on a graph G as the input structural features of the GNN encoder, which will be detailed later. Each element a_ij in the matrix denotes the weighted relation between e_i and e_j. As mentioned in Section 1, there are two types of structural differences: missing relations, due to the incomplete nature of KGs, and exclusive entities, caused by the different construction demands of applications or languages. We utilize two channels of the GNN encoder for each KG, so as to reconcile the two types of differences separately. That is, we generate two adjacency matrices for each channel: A_1 based on KG self-attention and A_2 based on cross-KG attention. Next, we describe how to compute each element a_ij in A_1 and A_2. Similarly, we can obtain A′_1 and A′_2 for KG G′.

KG Self-Attention KG self-attention aims at making better use of seed alignments based on the KG structure itself. This component selects informative neighbors according to the current entity and assigns them high weights. Following GAT (Velickovic et al., 2018), we define the normalized element a_ij in A_1, representing the connectivity from entity e_i to e_j, as follows:

a_{ij} = \mathrm{softmax}(c_{ij}) = \frac{\exp(c_{ij})}{\sum_{e_k \in N_{e_i} \cup \{e_i\}} \exp(c_{ik})}    (1)

where e_k ∈ N_{e_i} ∪ {e_i} denotes the neighbors of e_i with a self-loop, and c_ij is the attention coefficient measuring the importance of e_i to e_j, calculated by an attention function attn as follows:

c_{ij} = \mathrm{attn}(\mathbf{W}\mathbf{e}_i, \mathbf{W}\mathbf{e}_j) = \mathrm{LeakyReLU}\big(\mathbf{p}[\mathbf{W}\mathbf{e}_i \,\|\, \mathbf{W}\mathbf{e}_j]\big)    (2)

where \| indicates vector concatenation, and W and p are trainable parameters.

Cross-KG Attention Cross-KG attention aims at modeling the common subgraph of two KGs as structural features towards consistency. It prunes exclusive entities by assigning lower weights to the corresponding relations that have no counterparts in the other KG. We define a_ij in A_2 as follows:

a_{ij} = \max_{r \in R,\ r' \in R'} \mathbf{1}\big((e_i, r, e_j) \in T\big)\, \mathrm{sim}(r, r')    (3)

where 1(·) equals 1 if the condition holds true, and 0 otherwise. sim(·) is a similarity measure between relation types, defined as the inner product sim(r, r′) = r^T r′. Thus, a_ij finds the best mapping between the two KGs, and is 0 if there are no such relation types, which is the case for exclusive entities.

4.2 Multi-Channel GNN Encoder

GNN is a type of neural network model that deals with graph-structured data, the main idea of which is similar to a propagation model: to enhance the features of a node (i.e., entity) according to its neighbor nodes. Thus, we may stack L layers of GNNs to achieve further propagation. One of its variants is based on spectral graph convolutions, such as GCN (Kipf and Welling, 2017). Every GNN encoder takes the hidden states of node representations in the current layer as inputs, and computes new node representations as:

\mathrm{GNN}(A, H, W) = \sigma(AHW)    (4)

where A is an adjacency matrix showing the connectivity between nodes, H is the current node representations, W is the learned parameters, and σ is the activation function, chosen as ReLU(·) = max(0, ·). Inspired by multi-head attention networks (Velickovic et al., 2018), we use the two above-mentioned strategies to calculate connectivity matrices as different channels to propagate information from different aspects, and aggregate them with a pooling function. Our multi-channel GNN encoder is built by stacking multiple GNN encoders, defined as:

\mathrm{MultiGNN}(H^l; A_1, \dots, A_c) = \mathrm{Pooling}(H^{l+1}_1, \dots, H^{l+1}_c)    (5)

where c is the number of channels, A_i is the connectivity matrix of the i-th channel, and H^{l+1}_i is the computed hidden states in the (l+1)-th layer and i-th channel, formulated as:

H^{l+1}_i = \mathrm{GNN}(A_i, H^l, W_i)    (6)

where W_i are the weight parameters of the i-th channel. Here, we set i = 1, 2, referring to the above two attention schemes. We set H^0 as the entity embeddings, initialized randomly. In experiments, we select average pooling for the Pooling function due to its superior performance. We use such multi-channel GNN encoders to encode each KG, and obtain H^L and H′^L representing the enhanced entity embeddings, where each channel shares parameters W_1 = W′_1 and W_2 = W′_2 for structural knowledge transfer.
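Below is a simplified PyTorch sketch of our reading of Eqs. (1)-(6); it is not the released implementation (linked in the abstract). For brevity it uses dense matrices, assumes the neighbourhood mask already contains self-loops, and, for the cross-KG channel, assumes at most one relation links any entity pair.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def self_attention_weights(H, W, p, adj_mask):
    """Eqs. (1)-(2): a_ij = softmax_j(LeakyReLU(p [W e_i || W e_j])), restricted to
    e_j in N_{e_i} ∪ {e_i} via the boolean mask adj_mask; p has size 2*dim."""
    Wh = H @ W                                               # [n, d]
    n = Wh.size(0)
    left = Wh.unsqueeze(1).expand(n, n, -1)                  # W e_i
    right = Wh.unsqueeze(0).expand(n, n, -1)                 # W e_j
    c = F.leaky_relu(torch.cat([left, right], dim=-1) @ p)   # c_ij, shape [n, n]
    c = c.masked_fill(~adj_mask, float("-inf"))
    return torch.softmax(c, dim=-1)                          # A_1


def cross_kg_weights(rel_index, R, R_other):
    """Eq. (3), simplified: rel_index[i, j] holds the id of the relation linking
    e_i to e_j (or -1 if none); the weight is the best inner product r^T r'."""
    best = (R @ R_other.t()).max(dim=-1).values              # best counterpart per relation
    A2 = torch.zeros(rel_index.shape, dtype=best.dtype)
    linked = rel_index >= 0
    A2[linked] = best[rel_index[linked]]
    return A2


class MultiChannelGNN(nn.Module):
    """Eqs. (4)-(6): GNN(A, H, W_i) = ReLU(A H W_i) per channel, channels combined
    by average pooling, stacked for num_layers layers (the paper uses 2)."""

    def __init__(self, dim, num_channels=2, num_layers=2):
        super().__init__()
        self.weights = nn.ParameterList(
            [nn.Parameter(torch.empty(dim, dim)) for _ in range(num_channels * num_layers)])
        for w in self.weights:
            nn.init.xavier_uniform_(w)
        self.num_channels = num_channels
        self.num_layers = num_layers

    def forward(self, H, adjacencies):                        # adjacencies = (A_1, A_2)
        for layer in range(self.num_layers):
            outs = []
            for i, A in enumerate(adjacencies):
                W_i = self.weights[layer * self.num_channels + i]
                outs.append(torch.relu(A @ H @ W_i))          # H^{l+1}_i, Eqs. (4) and (6)
            H = torch.stack(outs).mean(dim=0)                 # average pooling, Eq. (5)
        return H
```

In the paper, the per-channel weights of one KG's encoder are shared with the other KG's encoder, which this sketch would realize by applying the same MultiChannelGNN instance to both graphs.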
4.3 Align Model

The align model embeds the two KGs into a unified vector space by pushing the seed alignments of entities (and relations) together. We judge whether two entities or two relations are equivalent by the distance between them. The objective of the align model is given below:

\mathcal{L}_a = \sum_{(e, e') \in A^s_e} \sum_{(e^-, e'^-) \in A^{s-}_e} \big[ d(e, e') + \gamma_1 - d(e^-, e'^-) \big]_+ + \sum_{(r, r') \in A^s_r} \sum_{(r^-, r'^-) \in A^{s-}_r} \big[ d(r, r') + \gamma_2 - d(r^-, r'^-) \big]_+    (7)

where [·]_+ = max{0, ·} represents the maximum between 0 and the input, d(·) = ‖·‖_2 is the distance measure, chosen as the L2 distance, A^{s-}_e and A^{s-}_r represent the negative pair sets of A^s_e and A^s_r, respectively, and γ_1 > 0 and γ_2 > 0 are margin hyper-parameters separating positive and negative entity and relation alignments. During the experiments, by calculating cosine similarity, we select the 25 entities closest to the corresponding entity in the same KG as negative samples (Sun et al., 2018). Negative samples are re-calculated every 5 epochs.

Rule Knowledge Constraints Since we have changed the KG structure by adding new triplets (i.e., grounded rules), we also introduce a triplet loss to hold the grounded rules as valid in the unified vector space. Taking KG G as an example, following Guo et al. (2016), we define the loss function as follows:

\mathcal{L}_r = \sum_{g^+ \in G(K)} \sum_{g^- \in G^-(K)} \big[ \gamma_r - I(g^+) + I(g^-) \big]_+ + \sum_{t^+ \in T} \sum_{t^- \in T^-} \big[ \gamma_r - I(t^+) + I(t^-) \big]_+    (8)

where g is short for a rule grounding g(k), and G(K) and T denote all rule grounds and all triplets, respectively. G^-(K) and T^- are negative sample sets obtained by replacing one of the involved entities using nearest sampling (Sun et al., 2018). I(·) is the truth value function; for a triplet t:

I(t) = 1 - \frac{1}{3\sqrt{d}}\, \lVert \mathbf{e}_i + \mathbf{r}_{ij} - \mathbf{e}_j \rVert_2    (9)

and for a grounding g = (t_c | t_s1, ..., t_sp), it is recursively calculated by:

I(t_s) = I(t_{s1} \wedge t_{s2}) = I(t_{s1}) \cdot I(t_{s2}), \qquad I(t_s \Rightarrow t_c) = I(t_s) \cdot I(t_c) - I(t_s) + 1    (10)

where d is the embedding size. Similarly, we obtain the loss L′_r for KG G′. Thus, the overall loss function for the multi-channel GNN is:

\mathcal{L} = \mathcal{L}_a + \mathcal{L}'_r + \mathcal{L}_r    (11)
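The following is a hedged PyTorch sketch of the objectives in Eqs. (7)-(11); the function names and tensor layouts are our assumptions for illustration, not the released training code. Negative pairs are assumed to be produced elsewhere by the nearest-neighbour sampling described above (25 closest entities by cosine similarity, refreshed every 5 epochs).

```python
import torch
import torch.nn.functional as F


def margin_ranking_loss(pos_a, pos_b, neg_a, neg_b, gamma):
    """One hinge term of Eq. (7): sum of [d(x, x') + γ − d(x⁻, x'⁻)]_+ with d = L2 distance."""
    d_pos = torch.norm(pos_a - pos_b, p=2, dim=-1)
    d_neg = torch.norm(neg_a - neg_b, p=2, dim=-1)
    return F.relu(d_pos + gamma - d_neg).sum()


def triplet_truth(h, r, t, dim):
    """Eq. (9): I(t) = 1 − ||e_i + r_ij − e_j||_2 / (3 √d)."""
    return 1.0 - torch.norm(h + r - t, p=2, dim=-1) / (3.0 * dim ** 0.5)


def grounding_truth(premise_truths, conclusion_truth):
    """Eq. (10): I(t_s1 ∧ ... ∧ t_sp) = Π_i I(t_si) and I(t_s ⇒ t_c) = I(t_s)·I(t_c) − I(t_s) + 1."""
    i_s = torch.stack(premise_truths).prod(dim=0)
    return i_s * conclusion_truth - i_s + 1.0


def rule_constraint_loss(pos_truth, neg_truth, gamma_r):
    """One hinge term of Eq. (8): sum of [γ_r − I(g⁺) + I(g⁻)]_+ (likewise for plain triplets)."""
    return F.relu(gamma_r - pos_truth + neg_truth).sum()


# Overall objective, Eq. (11): L = L_a + L'_r + L_r, e.g.
# loss = align_loss + rule_loss_g + rule_loss_g_prime
```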
5 Experiment

In this section, we conduct experiments on five publicly available datasets involving both different language pairs and different sources. We further investigate the key components of MuGNN and analyze how the knowledge inference and transfer mechanism contributes to KG alignment.

5.1 Experiment Settings

Datasets Following Sun et al. (2017, 2018), we conduct experiments on the benchmark datasets DBP15K and DWY100K. DBP15K contains three cross-lingual datasets: DBPZH-EN (Chinese to English), DBPJA-EN (Japanese to English), and DBPFR-EN (French to English). All the above datasets are extracted from multilingual DBpedia and include 15,000 entity pairs as seed alignments. DWY100K consists of two large-scale cross-resource datasets: DWY-WD (DBpedia to Wikidata) and DWY-YG (DBpedia to YAGO3). Each dataset includes 100,000 alignments of entities in advance. As for the seed alignments of relations, we employ the official relation alignment list published by DBpedia for DWY100K. As for DWY-YG, we manually align the relations because there is only a small set of relation types (31) in YAGO3. The statistics are listed in Table 1, where |A^s_r| denotes the number of seed alignments of relations.

Table 1: Statistics of DBP15K and DWY100K.
Datasets  |A^s_r|  #Relation  #Entity  #Triple
DBPZH       891      2,830     66,469  153,929
DBPEN                2,317     98,125  237,674
DBPJA       582      2,043     65,744  164,373
DBPEN                2,096     95,680  233,319
DBPFR        75      1,379     66,858  192,191
DBPEN                2,209    105,889  278,590
DWYDB        62        330    100,000  463,294
DWYWD                  220    100,000  448,774
DWYDB        24        302    100,000  428,952
DWYYG                   31    100,000  502,563

For each dataset, we employ AMIE+ for rule mining by setting the maximum number of premises to p = 2 and the PCA confidence to not less than 0.8. The statistical results of rules, transferred rules (Tr.Rule for short), ground triples, and ground triples based on transferred rules (Tr.ground for short) are exhibited in Table 2.

Table 2: Statistics of KG inference and transfer.
Datasets  #Rule  #Tr.Rule  #Ground  #Tr.ground
DBPZH     2,279    1,058    46,959     19,278
DBPEN     1,906      578    78,450     24,018
DBPJA     1,440      651    61,733     25,337
DBPEN     1,316      259    77,614     17,838
DBPFR     1,263       25    77,342      1,527
DBPEN     1,252       12    75,338      1,364
DWYDB       843       40   281,271     13,136
DWYWD       630       51   184,010     56,373
DWYDB       503        4   277,031     92,923
DWYYG        39       16   129,334     10,446

Baselines To investigate MuGNN's ability on entity alignment, we select four competitive baselines, including three translation-based models and one graph-based model, for comparison. MTransE (Chen et al., 2017) trains independent embeddings of each knowledge graph with TransE, and assigns the entity pairs in seed alignments similar embeddings by minimizing their Euclidean distances. JAPE (Sun et al., 2017) learns the representations of entities and relations from different KGs in a unified embedding space. It takes advantage of attribute triples to capture homogeneous entity properties across KGs. GCN-Align (Wang et al., 2018b) employs Graph Convolutional Networks to construct entity representations by propagating information from the neighborhood. AlignEA (Sun et al., 2018) swaps aligned entities in triples to calibrate the embeddings of the KGs in a unified embedding space. AlignEA is the most recent non-iterative state-of-the-art model.

Training Details Following Sun et al. (2017, 2018), we split 30% of the entity seed alignments as training data and leave the remaining data for testing. By convention, Hits@N and Mean Reciprocal Rank (MRR) are used as evaluation metrics. Hits@N indicates the percentage of the targets that have been correctly ranked in the top N (H in Table 3 for short). MRR is the average of the reciprocal ranks. Higher Hits@N and MRR indicate better performance. To make a fair comparison, we set the embedding size to 128 for MuGNN and all baselines. All graph models stack two layers of GNN. We utilize Adagrad (Duchi et al., 2011) as the optimizer. For the margins in MuGNN, we empirically set γ1 = 1.0 and γ2 = 1.0. We set γr = 0.12 to ensure that the rule knowledge constraints have less impact than the alignment model. Other hyper-parameters are chosen by running an exhaustive search over the following possible values: learning rate in {0.1, 0.01, 0.001}, L2 in {0.01, 0.001, 0.0001}, dropout in {0.1, 0.2, 0.5}. The optimal configuration of MuGNN for entity alignment is: learning rate = 0.001, L2 = 0.01, dropout = 0.2. We implement MuGNN with PyTorch-1.0. The experiments are conducted on a server with two 6-core Intel Xeon E5-2620 CPUs, two GeForce GTX TITAN X GPUs, and 128 GB of memory. 500 epochs cost nearly one hour.

Table 3: Overall performance.
Methods         | DBPZH-EN        | DBPJA-EN        | DBPFR-EN        | DBP-WD          | DBP-YG
                | H@1  H@10  MRR  | H@1  H@10  MRR  | H@1  H@10  MRR  | H@1  H@10  MRR  | H@1  H@10  MRR
MTransE         | .308 .614  .364 | .279 .575  .349 | .244 .556  .335 | .281 .520  .363 | .252 .493  .334
JAPE            | .412 .745  .490 | .363 .685  .476 | .324 .667  .430 | .318 .589  .411 | .236 .484  .320
AlignEA         | .472 .792  .581 | .448 .789  .563 | .481 .824  .599 | .566 .827  .655 | .633 .848  .707
GCN-Align       | .413 .744  .549 | .399 .745  .546 | .373 .745  .532 | .506 .772  .600 | .597 .838  .682
MuGNN w/o A^s_r | .479 .833  .597 | .487 .851  .604 | .496 .869  .621 | .590 .887  .693 | .730 .934  .801
MuGNN           | .494 .844  .611 | .501 .857  .621 | .495 .870  .621 | .616 .897  .714 | .741 .937  .810
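As a small illustration of the evaluation protocol (our own helper, not from the paper's code), Hits@N is the fraction of test entities whose true counterpart appears in the top N of the ranked candidate list, and MRR is the mean of the reciprocal ranks; the ranking itself would come from distances between the two KGs' entity embeddings.

```python
import numpy as np


def hits_and_mrr(ranks, ns=(1, 10)):
    """`ranks` holds the 1-based rank of the correct counterpart for each test entity."""
    ranks = np.asarray(ranks, dtype=float)
    hits = {n: float((ranks <= n).mean()) for n in ns}
    mrr = float((1.0 / ranks).mean())
    return hits, mrr


# Example: ranks derived from pairwise L2 distances between embeddings of G and G'
# dist = np.linalg.norm(emb_g[:, None, :] - emb_gprime[None, :, :], axis=-1)   # [n, m]
# gold = dist[np.arange(len(dist)), gold_idx]                                  # distance to truth
# ranks = 1 + (dist < gold[:, None]).sum(axis=1)
# print(hits_and_mrr(ranks))
```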
5.2 Overall Performance Table 3 shows the experimental results on DBP15K and DWY100K. In general, MuGNN significantly outperforms all baselines regarding all metrics, mainly because it reconciles the structural differences by two different schemes for KG completion and pruning, which are thus well modeled in multi-channel GNN. More specifically, on three small-scale crosslingual datasets, the average gains of MuGNN regarding Hits@1, Hits@10 and MRR are 3%, 6%, and 4%, respectively. While on largescale datasets, MuGNN achieves significant improvements (8%, 8% and 8% regarding Hits@1, Hits@10 and MRR, respectively). This is mainly because the large-scale datasets (e.g., DBP-YG) provide more prior knowledge (more than 3.5 facts per entity v.s. less than 2.5 facts in DBP15K) for rule mining, thus our proposed method has more capability in reconciling the structural differences between KGs, and makes better use of seed alignments. Since some methods do not rely on seed alignments of relations, we also test MuGNN without them, marked as MuGNN w/o As r. This also implies that we have no transferred rules between KGs. We can see that our method still performs competitively, and even achieves the best Hits@1 and MRR on DBPFR-EN. This is because the culture difference between French and English is much smaller than that between Chinese/Japanese and English, thus there is only a few exclusive rules mined from each KG, which can be transferred towards consistent completion (25 and 12 pieces of 1459 10.0% 15.0% 20.0% 25.0% 30.0% 35.0% 40.0% 45.0% 50.0% (A) DBPZH-EN 0 20 40 60 80 100 Hits@N 10.0% 15.0% 20.0% 25.0% 30.0% 35.0% 40.0% 45.0% 50.0% (B) DBPJA-EN 10.0% 15.0% 20.0% 25.0% 30.0% 35.0% 40.0% 45.0% 50.0% (C) DBPFR-EN Hits@1 Hits@10 AlignEA GCN-Align MuGNN Figure 3: Sensitivity to entity seed alignments (x-axis: proportion of seed alignments used for training). rules transferred between two KGs, as shown in Table 2). We also observe that GNN-based method (i.e., GCN-Align) performs better than translationbased methods except AlignEA. To better understand their advantages and disadvantages, we further conduct ablation study as follows. 5.3 Impact of Two Channels and Rule Transfer DBPZH-EN DBPJA-EN DBPFR-EN Hits@1 46 47 48 49 50 51 52 MuGNN MuGNN w/o CroAtt MuGNN w/o SelAtt MuGNN w/o RulTra DBPZH-EN DBPJA-EN DBPFR-EN MRR 0.56 0.57 0.58 0.59 0.60 0.61 0.62 0.63 0.64 Figure 4: Impact of two channels and rule transfer. The core components of MuGNN involve two channels based on KG self-attention and crossKG attention, and rule transfer towards consistent completion based on knowledge invariant assumption. We thus remove them from our model to to investigate their impacts to reconcile the structural differences, marked as MuGNN w/o SelAtt, MuGNN w/o CroAtt and MuGNN w/o RulTra. As shown in Figure 4, there is a performance drop in MuGNN w/o SelAtt and MuGNN w/o CroAtt as compared to MuGNN, which demonstrates the effectiveness of both channels. Specifically, the performance decrease more with the loss of cross-KG attention channel than that of KG self-attention, which implies the importance of utilizing cross-KG information for entity alignment. As for rule transfer, we can see that in most cases, it contributes much in performance. However, the performance difference between MuGNN and MuGNN w/o RulTra is negligible on DBPFR-EN. The reason is that the ground rule triple amounts for French and English datasets are limited (Table 2), which are less than 1% of the oracle triples. 
Therefore, rule transfer cannot provide sufficient cross-graph heterogeneous structure information. As a contrast, DBPJA-EN and DBPZH-EN provide more than 10k ground-rule triples, which gain decent performance improvements from rule transfer. 5.4 Impact of Seed Alignments To investigate the advantages and disadvantages between GNN-based method and translationbased methods, we test MuGNN, GCN-Align and AlignEA using different size of seed alignments. We gradually increase the proportion of entity seeds from 10% to 50%, and we can see the model’s sensitivity to seed alignments. As shown in Figure 3, GNN-based methods perform better than translation-based methods when there is only limited seeds available (10%), but perform worse along with the increase of seed alignments. This is because graph models can make better of seeds by propagating them over the entire structure, while they suffer from the heterogeneity between KGs due to the GNN’s sensitivity to structural differences, which lead to propagation errors aggregation. However, the performance of translation-based methods increases gradually along with the growing seeds since it can implicitly complete KG via knowledge representation learning, such as transE. MuGNN utilizes AMIE+ to explicitly complete two KGs via rule mining and transfer, which reconciles the structural differences; meanwhile, the GNN encoders make better use of seed information via two channels over the graphs. 1460 (U.S., leaderTitle, U.S.President) ∧(U.S.Secretary of State, reports to, U.S.President) ⇒(U.S.Secretary of State, seat, U.S.) (Chiang Kaishek,party,Kuomingtang) ∧(Chiang Weikuo,president,Chiang Kaishek) ⇒(Chiang Weikuo,party,Kuomintang) Table 4: Examples of groundings of transferred rules. 5.5 Qualitative Analysis We qualitatively analyze how the rule works by presenting the transferred rules and their groundings in Table 4. We can see the rule grounding in the first line indicates a common knowledge in the United States, which thus is easily mined in English KG DBPEN. Meanwhile, we find that such knowledge is missing in DBPZH, the Chinese KG. By transferring the corresponding rules from DBPEN to DBPZH, the asymmetric information is smoothed. Corresponding entities in Chinese DBPZH shall have a similar structure with their counterparts in English DBPEN, thus similar embeddings. That is, MuGNN indeed reconciles structural differences by rule transfer, and learns alignment-oriented embeddings. The second line presents a similar case that transfers a Chinese common rule knowledge into English KG. This demonstrates the effectiveness of rule transfer. Error Analysis: As shown in Table 2, the only 4 rules transfer from YAGO3 to DBpedia are grounded to 92,923 new ground rule triples, which is shocking and not informative. Further investigation finds that the rule (a, team, b) ⇒(a, affiliation, b) alone contributes 92,743 ground rule triples. Although the rule is logically correct, it is suspicious such a rule that establishes similar relations between entities would benefit entity alignment. We will deal with such noise in future. 6 Related Work Merging different KGs into a unified one has attracted much attention since it shall benefit many Knowledge-driven applications, such as information extraction (Cao et al., 2017a, 2018b), question answering (Zhang et al., 2015) and recommendation (Cao et al., 2019). 
Early approaches for entity alignment leverage various features to overcome the heterogeneity between KGs, such as machine translation and external lexicons (Suchanek et al., 2011; Wang et al., 2013). Following the success of KG representation learning, recent work embeds entities in different KGs into a low-dimensional vector space with the help of seed alignments (Chen et al., 2017). However, the limited seeds and structural differences take great negative impacts on the quality of KG embeddings, which performs alignment poorly. JAPE (Sun et al., 2017) and KDCoE (Chen et al., 2018) introduced attributes or descriptions information to improve entity embeddings, while IPTransE (Zhu et al., 2017) and BootEA (Sun et al., 2018) enlarged the seed set by selecting predicted alignments with high confidence iteratively. Clearly, the above strategies can be seen as a general enhancement for most alignment approaches (Sun et al., 2018), thus we focus on improving the alignment performance without any external information and in a non-iterative way. Inspired by Wang et al. (2018b), which utilize Graph Convolutional Network (GCN) (Kipf and Welling, 2017) to encode the entire KGs, we aim at reconciling the heterogeneity between KGs through completion and pruning, and learn alignment-oriented KG embeddings by modeling structural features from different perspectives via Multi-channel GNNs. 7 Conclusions In this paper, we propose a novel Multi-channel Graph Neural Network model, MuGNN, which learns alignment-oriented KG embeddings for entity alignment. It is able to alleviate the negative impacts caused by the structural heterogeneity and limited seed alignments. Through two channels, MuGNN not only explicitly completes the KGs, but also pruning exclusive entities by using different relation weighting schemes: KG selfattention and cross-KG attention, showing robust graph encoding capability. Extensive experiments on five publicly available datasets and further analysis demonstrate the effectiveness of our method. In future, we are interested in introducing text information of entities for alignment by considering word ambiguity (Cao et al., 2017b); and meanwhile, through cross-KG entity proximity (Cao et al., 2015). Acknowledgments NExT++ research is supported by the National Research Foundation, Prime Minister’s Office, Singapore under its IRC@SG Funding Initiative. 1461 References Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In NIPS. Yixin Cao, Lei Hou, Juanzi Li, and Zhiyuan Liu. 2018a. Neural collective entity linking. In Proceedings of the 27th International Conference on Computational Linguistics, pages 675–686. Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Chengjiang Li, Xu Chen, and Tiansi Dong. 2018b. Joint representation learning of cross-lingual words and entities via attentive distant supervision. In EMNLP. Yixin Cao, Lifu Huang, Heng Ji, Xu Chen, and Juanzi Li. 2017a. Bridge text and knowledge by learning multi-prototype entity mention embedding. In ACL. Yixin Cao, Juanzi Li, Xiaofei Guo, Shuanhu Bai, Heng Ji, and Jie Tang. 2015. Name list only? target entity disambiguation in short texts. In EMNLP. Yixin Cao, Jiaxin Shi, Juanzi Li, Zhiyuan Liu, and Chengjiang Li. 2017b. On modeling sense relatedness in multi-prototype word embedding. In IJCNLP. Yixin Cao, Xiang Wang, Xiangnan He, Tat-Seng Chua, et al. 2019. 
Unifying knowledge graph learning and recommendation: Towards a better understanding of user preferences. arXiv preprint arXiv:1902.06236. Muhao Chen, Yingtao Tian, Kai-Wei Chang, Steven Skiena, and Carlo Zaniolo. 2018. Co-training embeddings of knowledge graphs and entity descriptions for cross-lingual entity alignment. In IJCAI. Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In IJCAI. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research. Luis Gal´arraga, Christina Teflioudi, Katja Hose, and Fabian M Suchanek. 2015. Fast rule mining in ontological knowledge bases with amie++. The VLDB Journal—The International Journal on Very Large Data Bases. Shu Guo, Quan Wang, Lihong Wang, Bin Wang, and Li Guo. 2016. Jointly embedding knowledge graphs and logical rules. In ACL. Thomas N Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In ICLR. Thomas Rebele, Fabian Suchanek, Johannes Hoffart, Joanna Biega, Erdal Kuzey, and Gerhard Weikum. 2016. Yago: A multilingual knowledge base from wikipedia, wordnet, and geonames. In ISWC. Fabian M Suchanek, Serge Abiteboul, and Pierre Senellart. 2011. Paris: Probabilistic alignment of relations, instances, and schema. Proceedings of the VLDB Endowment. Zequn Sun, Wei Hu, and Chengkai Li. 2017. Cross-lingual entity alignment via joint attributepreserving embedding. In ISWC. Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu. 2018. Bootstrapping entity alignment with knowledge graph embedding. In IJCAI. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2018. Graph attention networks. In ICLR. Xiang Wang, Dingxian Wang, Canran Xu, Xiangnan He, Yixin Cao, and Tat-Seng Chua. 2018a. Explainable reasoning over knowledge graphs for recommendation. arXiv preprint arXiv:1811.04540. Zhichun Wang, Juanzi Li, and Jie Tang. 2013. Boosting cross-lingual knowledge linking via concept annotation. In IJCAI. Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. 2018b. Cross-lingual knowledge graph alignment via graph convolutional networks. In EMNLP. Mengdi Zhang, Tao Huang, Yixin Cao, and Lei Hou. 2015. Target detection and knowledge learning for domain restricted question answering. In NLPCC. Hao Zhu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Iterative entity alignment via joint knowledge embeddings. In IJCAI.
2019
140
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1462–1467 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1462 A Neural Multi-digraph Model for Chinese NER with Gazetteers Ruixue Ding1, Pengjun Xie1, Xiaoyan Zhang2, Wei Lu3, Linlin Li1 and Luo Si1 1Alibaba Group 2Beihang University, China 3Singapore University of Technology and Design {ada.drx,chengchen.xpj,linyan.lll,luo.si}@alibaba-inc.com [email protected], [email protected] Abstract Gazetteers were shown to be useful resources for named entity recognition (NER) (Ratinov and Roth, 2009). Many existing approaches to incorporating gazetteers into machine learning based NER systems rely on manually defined selection strategies or handcrafted templates, which may not always lead to optimal effectiveness, especially when multiple gazetteers are involved. This is especially the case for the task of Chinese NER, where the words are not naturally tokenized, leading to additional ambiguities. To automatically learn how to incorporate multiple gazetteers into an NER system, we propose a novel approach based on graph neural networks with a multidigraph structure that captures the information that the gazetteers offer. Experiments on various datasets show that our model is effective in incorporating rich gazetteer information while resolving ambiguities, outperforming previous approaches. 1 Introduction Previous work (Ratinov and Roth, 2009) shows that NER is a knowledge intensive task. Background knowledge is often incorporated into an NER system in the form of named entity (NE) gazetteers (Seyler et al., 2018). Each gazetteer is typically a list containing NEs of the same type. Many earlier research efforts show that an NER model can benefit from the use of gazetteers (Li et al., 2005). On the one hand, the use of NE gazetteers alleviates the need of manually labeling the data and can handle rare and unseen cases (Wang et al., 2018). On the other hand, resources of gazetteers are abundant. Many gazetteers have been manually created by previous studies (Zamin and Oxley, 2011). Besides, gazetteers can also be easily constructed from knowledge bases (e.g., Freebase (Bollacker et al., 2008)) or com䑳 ┩ 㐃 ⵌ ☓ ☭ 宑 ⪫ ㎢ Open Three At North Capital Human People Public Park Zhang San PER2 People’s Park Beijing LOC1 LOC2 PER2 Zhang Sanzai PER1 Beijing citizen Wrong matches Correct matches The actual translation: Zhang San is at the Beijing People’s Park LOC1 Figure 1: Example of Entity Matching mercial data sources (e.g., product catalogues of e-commence websites). While such background knowledge can be helpful, in practice the gazetteers may also contain irrelevant and even erroneous information which harms the system’s performance (Chiu and Nichols, 2016). This is especially the case for Chinese NER, where enormous errors can be introduced due to wrongly matched entities. Chinese language is inherently ambiguous since the granularity of words is less well defined than other languages (such as English). Thus massive wrongly matched entities can be generated with the use of gazetteers. As we can see from the example shown in Figure 1, matching a simple 9-character sentence with 4 gazetteers may result in 6 matched entities, among which 2 are incorrect. To effectively eliminate the errors, we need a way to resolve the conflicting matches. Existing methods often rely on hand-crafted templates or predefined selection strategies. For example, Qi et al. 
(2019) defined several n-gram templates to construct features for each character based on dictionaries and contexts. These templates are taskspecific and the lengths of the matched entities are constrained by templates. Several selection strategies are proposed, such as maximizing the total number of matched tokens in a sentence (Shang et al., 2018), or maximum matching with rules (Sassano, 2014). Though general, these strategies are unable to effectively utilize the contextual information. For example, as shown in Figure 1, 1463 maximizing the total number of matched tokens in a sentence results in wrongly matched entity 张 三在(Zhang Sanzai) instead of 张三(Zhang San). While such solutions either rely on manual efforts for rules, templates or heuristics, we believe it is possible to take a data-driven approach here to learn how to combine gazetteer knowledge. To this end, we propose a novel multi-digraph structure which can explicitly model the interaction of the characters and the gazetteers. Combined with an adapted Gated Graph Sequence Neural Networks (GGNN) (Li et al., 2016) and a standard bidirectional LSTM-CRF (Lample et al., 2016) (BiLSTM-CRF), our model learns a weighted combination of the information from different gazetteers and resolves matching conflicts based on contextual information. We summarize our contributions as follows: 1) we propose a novel multi-digraph model to learn how to combine the gazetteer information and to resolve conflicting matches in learning with contexts. To the best of our knowledge, we are the first neural approach to NER that models the gazetteer information with a graph structure; 2) experimental results show that our model significantly outperforms previous methods of using gazetteers and the state-of-the-art Chinese NER models; 3) we release a new dataset in the e-commerce domain. Our code and data are publicly available1. 2 Model Architecture The overall architecture of our model is shown in Figure 2. Specifically, our model is comprised of a multi-digraph, an adapted GGNN embedding layer and a BiLSTM-CRF layer. The multidigraph explicitly models the text together with the NE gazetteer information. The information in such a graph representation is then transformed to a feature representation space using an improved GGNN structure. The encoded feature representation is then fed to a standard BiLSTM-CRF to predict the final structured output. Text Graph. As shown in Figure 2, given the input sentence 张三在北京人民公园(Zhang San is at the Beijing People’s Park) consisting of 9 Chinese characters and 4 gazetteers PER1, PER2, LOC1, LOC2 (PER1 and PER2 are gazetteers of the same type PER – “person”, but are from different sources; similarly for LOC1 1https://github.com/PhantomGrapes/ MultiDigraphNER 𝒗𝒆𝑳𝑶𝑪𝟏 𝒗𝒆 𝑷𝑬𝑹𝟐 𝑣,𝑣,. 𝑣,/ 𝑣,0 𝑣,1 𝒗𝒔𝑳𝑶𝑪𝟏 L S T M C R F Raw space Embedding space GNN(𝒗𝒄𝟏) GNN(𝒗𝒄𝟐) GNN(𝒗𝒄𝟑) GNN(𝒗𝒄𝟒) GNN(𝒗𝒄𝟓) 𝑣,7 𝑣,8 𝑣,9 𝑣,: GNN(𝒗𝒄𝟔) GNN(𝒗𝒄𝟕) 𝒗𝒔 𝑷𝑬𝑹𝟐 𝒗𝒔𝑷𝑬𝑹𝟏 𝒗𝒆𝑷𝑬𝑹𝟏 GNN(𝒗𝒄𝟖) GNN(𝒗𝒄𝟗) 䑳┩ ⵌ☓☭宑⪫㎢ 㐃 𝑐@ 𝑐A 𝑐B 𝑐C 𝑐D 𝑐E 𝑐F 𝑐G 𝑐H 𝒗𝒔 𝑳𝑶𝑪𝟐 𝒗𝒆 𝑳𝑶𝑪𝟐 Figure 2: System architecture and LOC2). We construct nodes as follows. We first use 9 nodes to represent the complete sentence, where each Chinese character corresponds to one node. We also use another 4 pairs of nodes (8 in total) for capturing the information from the 4 gazetteers, where each pair corresponds to the start and end of every entity matched by a specific gazetteer. Next we add directed edges between the nodes. 
First, for each pair of adjacent Chinese characters, we add one directed edge between them, from the left character to the right one. Next, for each matched entity from a gazetteer, edges are added starting from the entity start node, connecting through the character nodes composing the entity, and ending with the entity end node for the corresponding gazetteer. For instance, as illustrated in Figure 2, with $c_1 c_2$, i.e., 张三 (Zhang San), matched by the gazetteer PER2, the following edges are constructed: $(v^{PER2}_s, v_{c_1})$, $(v_{c_1}, v_{c_2})$ and $(v_{c_2}, v^{PER2}_e)$, where $v^{PER2}_s$ and $v^{PER2}_e$ are the start and end nodes for the gazetteer PER2, and each edge is associated with a label indicating its type information (PER in this case). When edges of the same label overlap, they are merged into a single edge. Such a simple process leads to a multi-digraph (or "directed multigraph") representation encoding the character ordering information, the knowledge from multiple NE gazetteers, as well as their interactions. Formally, a multi-digraph is defined as $G := (V, E, L)$, where $V$ is the set of nodes, $E$ is the set of edges, and $L$ is the set of labels. With $n$ Chinese characters in the input sentence and $m$ gazetteers used in the model, the node set is $V = V_c \cup V_s \cup V_e$. Here, $V_c$ is the set of nodes representing characters. Given a gazetteer $g$, we introduce two special nodes $v^g_s$ and $v^g_e$ to the graph, which we use to denote the start and end of an entity matched with $g$. $V_s$ ($V_e$) is the set that contains the special nodes such as $v^g_s$ ($v^g_e$). Each edge in $E$ is assigned a label to indicate the type of the connection between nodes. We have the label set $L = \{\ell_c\} \cup \{\ell_{g_i}\}_{i=1}^{m}$. The label $\ell_c$ is assigned to edges connecting adjacent characters, which are used to model the natural ordering of characters in the text. The label $\ell_{g_i}$ is assigned to all edges that are used to indicate the presence of a text span that matches an entity listed in the gazetteer $g_i$. Adapted GGNN. Given a graph structure, the idea of GGNN is to produce meaningful outputs or to learn node representations through neural networks with gated recurrent units (GRU) (Cho et al., 2014). While other neural architectures for graphs exist, we believe that GGNN is more suitable for the Chinese NER task because of its better capability of capturing the local textual information compared to other GNNs such as GCN (Kipf and Welling, 2017). However, the traditional GGNN (Li et al., 2016) is unable to distinguish edges with different labels. We adapt GGNN so as to learn a weighted combination of the gazetteer information suitable for our task. To cope with our multi-digraph structure, we first extend the adjacency matrix $A$ to include edges of different labels. Next, we define a set of trainable contribution coefficients $\alpha_c, \alpha_{g_1}, \ldots, \alpha_{g_m}$, one for each type of edge. These coefficients are used to define the amount of contribution from each type of structural information (the gazetteers and the character sequence) for our task. In our model, an adapted GGNN architecture is utilized to learn the node representations. The initial state $h^{(0)}_v$ of a node $v$ is defined as follows:

$$h^{(0)}_v = \begin{cases} W^g(v) & v \in V_s \cup V_e \\ \big[W^c(v)^\top, W^{bi}(v)^\top\big]^\top & v \in V_c \end{cases} \quad (1)$$

where $W^c$ and $W^g$ are lookup tables for the character or the gazetteer the node represents. In the case of character nodes, a bigram embedding table $W^{bi}$ is used, since it has been shown to be useful for the NER task (Chen et al., 2015).
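To make the graph construction above concrete, below is a small sketch that builds the node and labeled-edge sets of the multi-digraph from a character sequence and gazetteer matches; the match format and helper names are assumptions for illustration, not the authors' released code.

```python
# Illustrative construction of the multi-digraph G = (V, E, L).
# `matches` maps a gazetteer name to a list of (start, end) character spans
# (inclusive, 0-indexed) that it matched in the sentence.
def build_multidigraph(chars, matches):
    nodes = [f"c{i}" for i in range(len(chars))]          # V_c: one node per character
    edges = set()                                         # tuples (source, target, label)

    # Edges labeled l_c between adjacent characters, left to right
    for i in range(len(chars) - 1):
        edges.add((f"c{i}", f"c{i+1}", "l_c"))

    for gaz, spans in matches.items():
        vs, ve = f"s_{gaz}", f"e_{gaz}"                   # start/end nodes for gazetteer gaz
        nodes += [vs, ve]
        for start, end in spans:
            # start node -> first character -> ... -> last character -> end node,
            # every edge labeled with the gazetteer's label l_gaz
            path = [vs] + [f"c{i}" for i in range(start, end + 1)] + [ve]
            for u, v in zip(path, path[1:]):
                edges.add((u, v, f"l_{gaz}"))             # same-label duplicates merge in the set
    return nodes, edges

# Example from Figure 2: "张三在北京人民公园", with PER2 matching 张三 (chars 0-1)
# and LOC1 matching 北京 (chars 3-4).
nodes, edges = build_multidigraph(list("张三在北京人民公园"),
                                  {"PER2": [(0, 1)], "LOC1": [(3, 4)]})
```

Using a set for the edges mirrors the rule that overlapping edges with the same label are merged into a single edge.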
The structural information of the graph is stored in the adjacency matrix A which serves to retrieve the states of neighboring nodes at each step. To adapt to the multi-digraph structure, A is extended to include edges of different labels, A = [A1, ..., A|L|]. The contribution coefficients are transformed into weights of edges in A: [wc, wg1, . . . , wgm] = σ([αc, αg1, . . . , αgm]) (2) Edges of the same label share the same weight. Next, the hidden states are updated by GRU. The basic recurrence for this propagation network is: H = [h(t−1) 1 , . . . , h(t−1) |V | ]⊤ (3) a(t) v = [(HW1)⊤, . . . , (HW|L|)⊤]A⊤ v + b (4) z(t) v = σ(W za(t) v + U zh(t−1) v ) (5) r(t) v = σ(W ra(t) v + U rh(t−1) v ) (6) ˆh(t) v = tanh(Wa(t) v + U(r(t) v ⊙h(t−1) v )) (7) h(t) v = (1 −z(t) v ) ⊙h(t−1) v + z(t) v ⊙ˆh(t) v (8) where h(t) v is the hidden state for node v at time step t, and Av is the row vector corresponding to node v in the adjacency matrix A. W and U are parameters to be learned. Equation 3 creates the state matrix H at time step (t −1). Equation 4 shows the information to be propagated through adjacent nodes. Equations 5, 6, 7, and 8 combine the information from adjacent nodes and the current hidden state of the nodes to compute the new hidden state at time step t. After T steps, we have our final state h(T) v for the node v. BiLSTM-CRF. The learned feature representations of characters {h(T) v | v ∈Vc} are then fed to a standard BiLSTM-CRF following the character order in the original sentence, to produce the output sequence. 3 Experiments 3.1 Experimental Setup Dataset. The three public datasets used in our experiments are OntoNotes 4.0 (Weischedel et al., 2010), MSRA (Levow, 2006), and Weibo-NER (Peng and Dredze, 2016). OntoNotes and MSRA are two datasets consisting of newswire text. Weibo-NER is in the domain of social media. We use the same split as Che et al. (2013) and Peng and Dredze (2016) on OntoNotes and on WeiboNER. To demonstrate the effectiveness of our model in the e-commerce domain, we further constructed a new dataset by crawling and manually annotating the NEs of two types, namely PROD (“products”) and BRAN (“brands”). We name our dataset as “E-commerce-NER”. The NER task in the e-commerce domain is more challenging. The NEs of interest are usually the names of products 1465 Models OntoNotes MSRA P R F P R F BiLSTM-CRF 72.0 75.1 73.5 92.3 92.4 92.4 (+ N-gram) 71.1 75.5 73.3 92.7 92.7 92.7 (+ PIET) 71.6 74.6 73.1 92.9 93.4 93.1 (+ PDET) 73.8 73.8 73.8 93.1 93.1 93.1 Our model (w/o gazetteers) 74.8 73.0 73.9 93.2 92.7 92.9 Our model 75.4 76.6 76.0 94.6 94.2 94.4 Zhang and Yang (2018) 76.4 71.6 73.9 93.6 92.8 93.2 Dong et al. (2016) 91.3 90.6 91.0 Zhang et al. (2006) 90.2 90.2 91.2 Table 1: Results on the newswire data Models Weibo-NER E-commerce-NER P R F P R F BiLSTM-CRF 60.8 52.9 56.6 71.1 76.1 73.6 (+ N-gram) 57.8 53.6 55.6 71.2 75.9 73.5 (+ PIET) 57.7 54.4 56.0 71.7 75.8 73.7 (+ PDET) 59.2 54.4 56.7 72.6 75.1 73.8 Our model (w/o gazetteers) 62.1 52.7 57.0 70.7 74.6 72.6 Our model 63.1 56.3 59.5 74.3 76.2 75.2 Zhang and Yang (2018) 58.8 Peng and Dredze (2016) 59.0 Table 2: Results on social media/e-commerce domains or brands. In practice, the number of unique entities that can appear in such a domain can be easily tens of millions. The training data is typically far from being enough to cover even a small portion of all such NEs. Thus, the effectiveness of an NER system in the e-commerce domain relies heavily on domain-specific gazetteers. Gazetteers. 
For the three public datasets, we collect gazetteers of 4 categories (PER, GPE, ORG, LOC). Each category has 3 gazetteers with different sizes, selected from multiple sources including “Sougou”2, “HanLP”3 and “Hankcs”4. We add an extra indomain gazetteer of type PER for WeiboNER dataset since the online community has a rich set of nicknames and aliases. For our dataset in the e-commerce domain, we collect 3 product name gazetteers and 4 brand name gazetteers crawled from product catalogues from the e-commerce site Taobao5. To better demonstrate the problem of conflicting matches with gazetteers added as knowledge source, the entity conflict rate of each dataset with respect to the gazetteers it references is analyzed. The entity conflict rate (ECR) is defined as the ratio of non-identical overlapping entity matches to all unique entities matched with all gazetteers. The ECR of OntoNotes, MSRA, Weibo-NER and E-commerce-NER are respec2A crowdsourced gazetteer used by the Chinese IME Sougou: https://pinyin.sogou.com/dict/. 3A gazetteer from a widely used open-source Chinese NLP toolkit: https://github.com/hankcs/HanLP. 4A gazetteer which consists of over ten million entries: http://www.hankcs.com/nlp/corpus. 5http://www.taobao.com tively 39.70%, 44.75%, 36.10% and 46.05%. Models for Comparison. We use BiLSTMCRF (Lample et al., 2016) with character+bigram embedding without using any gazetteer as the comparison baseline6. We explore the three different methods of adding gazetteer features that we compare against: N-gram features, PositionIndependent Entity Type (PIET) features and Position-Dependent Entity Type (PDET) features. These feature construction processes follow the work of Wang et al. (2018). We refer the readers to their paper for further details. To show the effect of adding gazetteer information, a trivial version of our model without using any gazetteer information is also implemented as one of our baselines (our model w/o gazetteers). 3.2 Results From Table 1, it can be seen that our model with 12 general gazetteers of 4 entity types has an overall highest performance in the news domain. By adding domain specific gazetteers, our model is capable of improving the NER quality in both the social media and the e-commerce domains, as shown in Table 2. Previous methods of using gazetteers do improve the performance of the BiLSTM-CRF model, but the performance gains are not significant. We can observe the performance on both OntoNotes and Weibo-NER drop, when the N-gram and the PIET features were used on top of the BiLSTM-CRF model. We believe this is due to the erroneous information the model captured, especially when multiple conflicting gazetteers were used together. Compared to these methods, our model achieves a remarkably higher performance. Our model is not only able to improve recall by using the gazetteer knowledge, but is also able to offer an improved precision. To understand the effect of using gazetteers by different methods, we conducted some detailed experiments on OntoNotes. We first split all the sentences in the test set into 3 groups, based on if the entities also appear in the training data or not: “All” contains those sentences in which all entities can be found in the training set, “Some” contains sentences which contain some of the entities from the training set but not all, “None” contains sentences where none of the entities appear in the training set. 
For the last set of sentences, we con6We implemented the baseline models using the NCRFPP toolkit (Yang and Zhang, 2018). 1466 Entities PDET Our model Our model Appear In (w/o gazetteers) Train Gaze P R F P R F P R F All 84.6 85.3 85.0 85.3 88.8 87.0 87.4 88.1 87.7 Some 78.2 73.2 75.7 79.5 76.0 77.7 78.0 72.0 74.9 None 66.7 62.9 64.7 68.5 65.0 66.7 66.5 59.2 62.6 None All 69.8 64.8 67.2 74.2 67.0 72.0 71.4 59.9 65.1 None Some 66.7 61.0 63.7 66.1 61.8 63.9 64.0 56.7 60.1 None None 63.6 62.7 63.1 64.8 62.9 63.8 64.2 60.9 62.5 Table 3: Detailed results on OntoNotes (Train: Training data, Gaze: Gazetteers). ducted additional experiments by further splitting them into three sub-groups, based on whether their entities appear in the gazetteers. We compare three models under each setting: 1) PDET, 2) our model and 3) our model with all gazetteer nodes removed. We note that the last model can be regarded as a trivial version of both PDET and our model. As shown in Table 3, when none of the entities in a test sentence has been seen during training, with increasing gazetteer coverage our model has a more significant improvement compared to PDET. When none or some of the test entities appear in the training data, both PDET and our model perform better than the trivial model. This shows the benefit of utilizing gazetteer knowledge. Furthermore, in this case, our model still yields a relatively better F1 score, due to its better way of representing gazetteer information using multi-digraph. In the case where all the entities appear during training, both PDET and our model yield lower performance than the trivial model. We believe this is due to errors introduced by the gazetteers. Nonetheless, our model is more robust than PDET in this case. Ablation Study. We also conducted an ablation study to explore the contributions brought by the weighted combination of gazetteers, so as to understand how our model can effectively use the gazetteer information. As shown in Table 4, by fixing the gazetteer contribution coefficients to 1, the model’s performance drops by 1.8 points in terms of F1 score. The precision is even lower than that of our model without gazetteers. This experiment shows that, without a good combination of the gazetteer information, the model fails to resolve conflicting matches. In that case, errors are introduced with the use of gazetteers. These errors harm the model’s performance and have a negative effect on the precision. We use the following ablation test to understand whether the gazetteer information can be fully utilized by our model. There are three types of inforModels P R F Our model 75.4 76.6 76.0 (fixed coefficients) 73.8 74.5 74.2 (AI1G) 73.4 76.7 75.0 (1T1G) 78.9 73.0 75.8 (w/o gazetteers) 74.8 73.0 73.9 Table 4: Ablation study on OntoNotes mation provided by gazetteers: boundary information, entity-type information, and source information. The All in One Gazetteer (AI1G) experiment shows what role the boundary information plays in our model by merging all 12 gazetteers into one lexicon where entity type information is discarded. It outperforms the model without gazetteers by 1.1 points in terms of F1 score. The One Type One Gazetteer (1T1G) model adds the entity type information on top of the AI1G model by adding only the entity type labels (i.e., there is one gazetteer for one type, by merging all gazetteers of the same type into one). Doing so leads to a 0.8 points improvement over the AI1G model. 
From the experiments we can see that the entities’ source information is also helpful. For example, an entity that appears in multiple PER gazetteers is more likely to be an entity of type PER than an entity appearing only in one gazetteer. Our model can effectively capture such source information and has an improvement of 0.2 points in terms of F1 compared to the 1T1G model. 4 Conclusion and Future Work We present a novel neural multi-digraph model for performing Chinese named entity recognition with gazetteers. Based on the proposed multi-digraph structure, we show that our model is better at resolving entity-matching conflicts. Through extensive experiments, we have demonstrated that our approach outperforms the state-of-the-art models and previous methods for incorporating gazetteers into a Chinese NER system. The ablation study confirms that a suitable combination of gazetteers is essential and our model is able to make good use of the gazetteer information. Although we specifically investigated the NER task for Chinese in this work, we believe the proposed model can be extended and applied to other languages, for which we leave as future work. Acknowledgments We would like to thank the anonymous reviewers for their thoughtful comments. Wei Lu is supported by SUTD project PIE-SGP-AI-2018-01. 1467 References Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proc. of SIGMOD. Wanxiang Che, Mengqiu Wang, Christopher D Manning, and Ting Liu. 2013. Named entity recognition with bilingual constraints. In Proc. of NAACL-HLT. Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015. Long short-term memory neural networks for chinese word segmentation. In Proc. of EMNLP. Jason P C Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. In Proc. of TACL. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proc. of EMNLP. Chuanhai Dong, Jiajun Zhang, Chengqing Zong, Masanori Hattori, and Hui Di. 2016. Characterbased lstm-crf with radical-level features for chinese named entity recognition. In Proc. of ICCPOL. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In Proc. of ICLR. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural Architectures for Named Entity Recognition. In Proc. of NAACL-HLT. Gina-Anne Levow. 2006. The third international chinese language processing bakeoff: Word segmentation and named entity recognition. In Proc. of the Fifth SIGHAN Workshop on Chinese Language Processing. Yaoyong Li, Kalina Bontcheva, and Hamish Cunningham. 2005. Svm based learning system for information extraction. Deterministic and statistical methods in machine learning. Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. 2016. Gated graph sequence neural networks. In Proc. of ICLR. Nanyun Peng and Mark Dredze. 2016. Improving named entity recognition for chinese social media with word segmentation representation learning. In Proc. of ACL. Zhang Qi, Liu Xiaoyu, and Fu Jinlan. 2019. Neural Networks Incorporating Dictionaries for Chinese Word Segmentation. In Proc. of AAAI. Lev Ratinov and Dan Roth. 2009. 
Design challenges and misconceptions in named entity recognition. In Proc. of CoNLL. Manabu Sassano. 2014. Deterministic Word Segmentation Using Maximum Matching with Fully Lexicalized Rules. In Proc. of EACL. Dominic Seyler, Tatiana Dembelova, Luciano Del Corro, Johannes Hoffart, and Gerhard Weikum. 2018. A study of the importance of external knowledge in the named entity recognition task. In Proc. of ACL. Jingbo Shang, Liyuan Liu, Xiang Ren, Xiaotao Gu, Teng Ren, and Jiawei Han. 2018. Learning named entity tagger using domain-specific dictionary. In Proc. of EMNLP. Qi Wang, Yuhang Xia, Yangming Zhou, Tong Ruan, Daqi Gao, and Ping He. 2018. Incorporating dictionaries into deep neural networks for the chinese clinical named entity recognition. arXiv preprint. Ralph Weischedel, Martha Palmer, and Mitchell P Marcus. 2010. Ontonotes release 4.0. LDC2011T03, Philadelphia, Penn.: Linguistic Data Consortium. Jie Yang and Yue Zhang. 2018. Ncrf++: An opensource neural sequence labeling toolkit. In Proc. of ACL (System Demonstrations). Norshuhani Zamin and Alan Oxley. 2011. Building a corpus-derived gazetteer for named entity recognition. In Software Engineering and Computer Systems. Suxiang Zhang, Ying Qin, Juan Wen, and Xiaojie Wang. 2006. Word Segmentation and Named Entity Recognition for SIGHAN Bakeoff3. In Proc. of the Fifth SIGHAN Workshop on Chinese Language Processing. Yue Zhang and Jie Yang. 2018. Chinese NER Using Lattice LSTM. In Proc. of ACL.
2019
141
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1468–1476 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1468 Improved Language Modeling by Decoding the Past Siddhartha Brahma IBM Research, Almaden, USA [email protected] Abstract Highly regularized LSTMs achieve impressive results on several benchmark datasets in language modeling. We propose a new regularization method based on decoding the last token in the context using the predicted distribution of the next token. This biases the model towards retaining more contextual information, in turn improving its ability to predict the next token. With negligible overhead in the number of parameters and training time, our Past Decode Regularization (PDR) method improves perplexity on the Penn Treebank dataset by up to 1.8 points and by up to 2.3 points on the WikiText-2 dataset, over strong regularized baselines using a single softmax. With a mixture-of-softmax model, we show gains of up to 1.0 perplexity points on these datasets. In addition, our method achieves 1.169 bits-per-character on the Penn Treebank Character dataset for character level language modeling. Each of these results constitute improvements over models without PDR in their respective settings. 1 Introduction Language modeling is a fundamental task in natural language processing. Given a sequence of tokens, its joint probability distribution can be modeled using the auto-regressive conditional factorization. This leads to a convenient formulation where a language model has to predict the next token given a sequence of tokens as context. Recurrent neural networks are an effective way to compute distributed representations of the context by sequentially operating on the embeddings of the tokens. These representations can then be used to predict the next token as a probability distribution over a fixed vocabulary using a linear decoder followed by Softmax. Starting from the work of (Mikolov et al., 2010), there has been a long list of works that seek to improve language modeling performance using more sophisticated recurrent neural networks (RNNs) (Zaremba et al., 2014; Zilly et al., 2017; Zoph and Le, 2016; Mujika et al., 2017). However, in more recent work vanilla LSTMs (Hochreiter and Schmidhuber, 1997) with relatively large number of parameters have been shown to achieve state-of-the-art performance on several standard benchmark datasets both in wordlevel and character-level perplexity (Merity et al., 2018b,a; Melis et al., 2018; Yang et al., 2017). A key component in these models is the use of several forms of regularization e.g. variational dropout on the token embeddings (Gal and Ghahramani, 2016), dropout on the hidden-tohidden weights in the LSTM (Wan et al., 2013), norm regularization on the outputs of the LSTM and classical dropout (Srivastava et al., 2014). By carefully tuning the hyperparameters associated with these regularizers combined with optimization algorithms like NT-ASGD (a variant of Averaged SGD), it is possible to achieve very good performance. Each of these regularizations address different parts of the LSTM model and are general techniques that could be applied to any other sequence modeling problem. In this paper, we propose a regularization technique that exploits a symmetry in language models. 
A unique aspect of language modeling using LSTMs (or any RNN) is that at each time step t, the model takes as input a particular token xt from a vocabulary W and using the hidden state of the LSTM (which encodes the context till xt−1) predicts a probability distribution wt+1 on the next token xt+1 over the same vocabulary as output. Since xt can be mapped to a trivial probability distribution over W, this operation can be interpreted as transforming distributions over W (Inan et al., 2016). Clearly, the output distribution is dependent on and is a function of xt and the context 1469 further in the past and encodes information about it. We ask the following question – How much information is it possible to decode about the input distribution (and hence xt) from the output distribution wt+1? In general, it is impossible to decode xt unambiguously. Even if the language model is perfect and correctly predicts xt+1 with probability 1, there could be many tokens preceding it. However, in this case the number of possibilities for xt will be limited, as dictated by the bigram statistics of the corpus and the language in general. We argue that biasing the language model such that it is possible to decode more information about the past tokens from the predicted next token distribution is beneficial. We incorporate this intuition into a regularization term in the loss function of the language model. The symmetry in the inputs and outputs of the language model at each step lends itself to a simple decoding operation. It can be cast as a (pseudo) language modeling problem in “reverse”, where the future prediction wt+1 acts as the input and the last token xt acts as the target of prediction. The token embedding matrix and weights of the linear decoder of the main language model can be reused in the past decoding operation. We only need a few extra parameters to model the nonlinear transformation performed by the LSTM, which we do by using a simple stateless layer. We compute the cross-entropy loss between the decoded distribution for the past token and xt and add it to the main loss function after suitable weighting. The extra parameters used in the past decoding are discarded during inference time. We call our method Past Decode Regularization or PDR for short. We conduct extensive experiments on four benchmark datasets for word level and character level language modeling by combining PDR with existing LSTM based language models and achieve improved performance on three of them. 2 Past Decode Regularization (PDR) Let X = (x1, x2, · · · , xt, · · · , xT ) be a sequence of tokens. In this paper, we will experiment with both word level and character level language modeling. Therefore, tokens can be either words or characters. The joint probability P(X) factorizes into P(X) = T Y t=1 P(xt|x1, x2, · · · , xt−1) (1) Let ct = (x1, x2, · · · , xt) denote the context available to the language model for xt+1. Let W denote the vocabulary of tokens, each of which is embedded into a vector of dimension d. Let E denote the token embedding matrix of dimension |W|×d and ew denote the embedding of w ∈W. An LSTM computes a distributed representation of ct in the form of its hidden state ht, which we assume has dimension d as well. 
The probability that the next token is w can then be calculated using a linear decoder followed by a Softmax layer as Pθ(w|ct) = Smax(htET + b)|w = exp(hteT w+bw) P w′∈W exp(hteT w′+bw′) (2) where bw′ is the entry corresponding to w′ in a bias vector b of dimension |W| and |w represents projection onto w. Here we assume that the weights of the decoder are tied with the token embedding matrix E (Inan et al., 2016; Press and Wolf, 2017). To optimize the parameters of the language model θ, the loss function to be minimized during training is set as the cross-entropy between the predicted distribution Pθ(w|ct) and the actual token xt+1. LCE = X t −log(Pθ(xt+1|ct)) (3) Note that Eq.(2), when applied to all w ∈W produces a 1 × |W| vector wt+1, encapsulating the prediction the language model has about the next token xt+1. Since this is dependent on and conditioned on ct, wt+1 clearly encodes information about it; in particular about the last token xt in ct. In turn, it should be possible to infer or decode some limited information about xt from wt+1. We argue that by biasing the model to be more accurate in recalling information about past tokens, we can help it in predicting the next token better. To this end, we define the following decoding operation to compute a probability distribution over wc ∈W as the last token in the context. Pθr(wc|wt+1) = Smax(fθr(wt+1E)ET + b′ θr)|wc (4) Here fθr is a non-linear function that maps vectors in Rd to vectors in Rd and b′ θr is a bias vector of dimension |W|, together comprising the parameters θr. In effect, we are decoding the past – the last token in the context xt. This produces a vector wr t of dimension 1 × |W|. The cross-entropy loss 1470 PTB WT2 PTBC enwik8 Train Valid Test Train Valid Test Train Valid Test Train Valid Test Tokens 888K 70.4K 78.7K 2.05M 213K 241K 5.01M 393k 442k 90M 5M 5M Vocab 10K 33.3K 51 205 Table 1: Statistics of the language modeling benchmark datasets. with respect to the actual last token xt can then be computed as LPDR = X t −log(Pθr(xt|wt+1)) (5) Here PDR stands for Past Decode Regularization. LPDR captures the extent to which the decoded distribution of tokens differs from the actual tokens xt in the context. Note the symmetry between Eqs.(2) and (5). The “input” in the latter case is wt+1 and the “context” is provided by a nonlinear transformation of wt+1E. Different from the former, the context in Eq.(5) does not preserve any state information across time steps as we want to decode only using wt+1. The term wt+1E can be interpreted as a “soft” token embedding lookup, where the token vector wt+1 is a probability distribution instead of a unit vector. We add λPDRLPDR to the loss function in Eq.(3) as a regularization term, where λPDR is a positive weighting coefficient, to construct the following new loss function for the language model. L = LCE + λPDRLPDR (6) Thus equivalently PDR can also be viewed as a method of defining an augmented loss function for language modeling. The choice of λPDR dictates the degree to which we want the language model to incorporate our inductive bias i.e. decodability of the last token in the context. If it is too large, the model will fail to predict the next token, which is its primary task. If it is zero or too small, the model will retain less information about the last token which hampers its predictive performance. In practice, we choose λPDR by a search based on validation set performance. 
Note that the trainable parameters θr associated with PDR are used only during training to bias the language model and are not used at inference time. This also means that it is important to control the complexity of the nonlinear function fθr so as not to overly bias the training. As a simple choice, we use a single fully connected layer of size d followed by a Tanh nonlinearity as fθr. This introduces few extra parameters and a small increase in training time as compared to a model not using PDR. 3 Experiments We present extensive experimental results to show the efficacy of using PDR for language modeling on four standard benchmark datasets – two each for word level and character level language modeling. For the former, we evaluate our method on the Penn Treebank (PTB) (Mikolov et al., 2010) and the WikiText-2 (WT2) (Merity et al., 2016) datasets. For the latter, we use the Penn Treebank Character (PTBC) (Mikolov et al., 2010) and the Hutter Prize Wikipedia Prize (Hutter, 2018) (also known as Enwik8) datasets. Key statistics for these datasets is presented in Table 1. As mentioned in the introduction, some of the best existing results on these datasets are obtained by using extensive regularization techniques on relatively large LSTMs (Merity et al., 2018b,a; Yang et al., 2017). We apply our regularization technique to these models, the so called AWDLSTM. We consider two versions of the model – one with a single softmax (AWD-LSTM) and one with a mixture-of-softmaxes (AWD-LSTM-MoS). The PDR regularization term is computed according to Eq.(4) and Eq.(5). We call our model AWDLSTM+PDR when using a single softmax and AWD-LSTM-MoS+PDR when using a mixtureof-softmaxes. We largely follow the experimental procedure of the original models and incorporate their dropouts and regularizations in our experiments. For completeness, we briefly mention the set of dropouts and regularizations reused from AWDLSTM in our experiments. They are the following. 1. Embedding dropout – Variational or locked dropout applied to the token embedding matrix. 1471 2. Word dropout – Dropout applied to entire tokens. 3. LSTM layer dropout – Dropout between layers of the LSTM. 4. LSTM weight dropout – Dropout applied to the hidden-to-hidden connections in the LSTM. 5. LSTM output dropout – Dropout applied to the final output of the LSTM. 6. Alpha/beta regularization – Activation and temporal activation regularization applied to the LSTM states. 7. Weight decay – L2 regularization on the parameters of the model. Note that these regularizations are applied to the input, hidden state and output of the LSTM and do not exploit the special structure of language modeling, which PDR does. The relative contribution of these existing regularizations and PDR will be analyzed in Section 6. There are 7 hyperparameters associated with the regularizations used in AWD-LSTM (and one extra with MoS). PDR also has an associated weighting coefficient λPDR. For our experiments, we set λPDR = 0.001 which was determined by a coarse search on the PTB and WT2 validation sets. For the remaining ones, we perform light hyperparameter search in the vicinity of those reported for AWD-LSTM in (Merity et al., 2018b,a) and for AWD-LSTM-MoS in (Yang et al., 2017). 3.1 Model and training for PTB and WikiText-2 For the single softmax model (AWDLSTM+PDR), for both PTB and WT2, we use a 3-layered LSTM with 1150, 1150 and 400 hidden dimensions. The word embedding dimension is set to d = 400. 
For the mixture-of-softmax model, we use a 3-layer LSTM with dimensions 960, 960 and 620, an embedding dimension of 280 and 15 experts for PTB. For WT2, we use a 3-layer LSTM with dimensions 1150, 1150 and 650, an embedding dimension of d = 300 and 15 experts. Weight tying is used in all the models. For training the models, we follow the same procedure as AWD-LSTM i.e. a combination of SGD and NT-ASGD, followed by finetuning. We adopt the learning rate schedules and batch sizes of (Merity et al., 2018b) and (Yang et al., 2017) in our experiments. 3.2 Model and training for PTBC and Enwik8 For PTBC, we use a 3-layer LSTM with 1000, 1000 and 200 hidden dimensions and a character embedding dimension of d = 200. For Enwik8, we use a LSTM with 1850, 1850 and 400 hidden dimensions and the characters are embedded in d = 400 dimensions. For training, we largely follow the procedure laid out in (Merity et al., 2018a). For each of the datasets, AWDLSTM+PDR has less than 1% more parameters than the corresponding AWD-LSTM model (during training only). The maximum observed time overhead due to the additional computation is less than 3%. 4 Results on Word Level Language Modeling The results for PTB are shown in Table 2. With a single softmax, our method (AWD-LSTM+PDR) achieves a perplexity of 55.6 on the PTB test set, which improves on the model without PDR by an absolute 1.7 points. The advantages of better information retention due to PDR are maintained when combined with a continuous cache pointer (Grave et al., 2016), where our method yields an absolute improvement of 1.2 over AWD-LSTM. Notably, when coupled with dynamic evaluation (Krause et al., 2018), the perplexity is decreased further to 49.3. Note that, for both cache pointer and dynamic evaluation, we coarsely tune the associated hyperparameters on the validation set. Using a mixture-of-softmaxes, our method (AWD-LSTM-MoS+PDR) achieves a test perplexity of 53.8, an improvement of 0.6 points over the model without PDR. The use of dynamic evaluation pushes the perplexity further down to 47.3. Note that our models do not use the recently proposed frequency agnostic word embeddings FRAGE (Gong et al., 2018) and it is possible that adding PDR can lead to similar gains when applied to models using such embeddings. PTB is a restrictive dataset with a vocabulary of 10K words. Achieving good perplexity requires considerable regularization. The fact that PDR can improve upon existing heavily regularized models is empirical evidence of its distinctive nature and its effectiveness in improving language models. 1472 Model #Params Valid Test State-of-the-art Methods (Single Softmax) (Merity et al., 2018b) – AWD-LSTM 24.2M 60.0 57.3 (Merity et al., 2018b) – AWD-LSTM + continuous cache pointer 24.2M 53.9 52.8 (Krause et al., 2018) – AWD-LSTM + dynamic evaluation 24.2M 51.6 51.1 (Gong et al., 2018) – AWD-LSTM + cont. cache pointer w/ FRAGE 24.2M 52.3 51.8 Our Method (Single Softmax) AWD-LSTM+PDR 24.2M 57.9 55.6 (-1.7) AWD-LSTM+PDR + continuous cache pointer 24.2M 52.4 51.6 (-1.2) AWD-LSTM+PDR + dynamic evaluation 24.2M 50.1 49.3 (-1.8) Sate-of-the-art Methods (Mixture-of-Softmax) (Yang et al., 2017) – AWD-LSTM-MoS 22M 56.5 54.4 (Yang et al., 2017) – AWD-LSTM-MoS + dynamic evaluation 22M 48.3 47.7 (Gong et al., 2018) – AWD-LSTM-MoS + dyn. 
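As a recap of Section 2, the following PyTorch-style sketch shows how the past-decode term of Eqs. (4)–(5) can be added to the standard cross-entropy objective as in Eq. (6); the module and variable names are illustrative assumptions, and the AWD-LSTM dropout machinery is omitted.

```python
# Illustrative sketch of the Past Decode Regularization objective.
# E: tied |W| x d embedding/decoder matrix; h: LSTM outputs for a batch of
# positions; x_prev / x_next: the last context token and the target token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PastDecodeRegularizer(nn.Module):
    def __init__(self, d, vocab_size):
        super().__init__()
        # f_{theta_r}: a single fully connected layer of size d followed by Tanh
        self.f = nn.Sequential(nn.Linear(d, d), nn.Tanh())
        self.bias = nn.Parameter(torch.zeros(vocab_size))    # b'_{theta_r}

    def forward(self, w_next, E, x_prev):
        # "Soft" embedding lookup of the predicted next-token distribution,
        # then decode a distribution over the last context token (Eq. (4)).
        soft_emb = w_next @ E                                 # (batch, d)
        logits_prev = self.f(soft_emb) @ E.t() + self.bias    # (batch, |W|)
        return F.cross_entropy(logits_prev, x_prev)           # L_PDR, Eq. (5)

def pdr_language_model_loss(h, E, b, x_next, x_prev, pdr, lam=0.001):
    logits_next = h @ E.t() + b                    # Eq. (2), decoder weights tied with E
    l_ce = F.cross_entropy(logits_next, x_next)    # Eq. (3)
    w_next = F.softmax(logits_next, dim=-1)
    return l_ce + lam * pdr(w_next, E, x_prev)     # Eq. (6)
```

At inference time the PastDecodeRegularizer parameters would simply be discarded, matching the description in Section 2.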
evaluation w/ FRAGE 22M 47.4 46.5 Our Method (Mixture-of-Softmax) AWD-LSTM-MoS+PDR 22M 56.2 53.8 (-0.6) AWD-LSTM-MoS+PDR + dynamic evaluation 22M 48.0 47.3 (-0.4) Table 2: Perplexities on Penn Treebank (PTB) for single and mixture-of-softmaxes models. Values in parentheses show gain over respective models without using PDR. The number of parameters during training is 24.4M. Our models do not use frequency agnostic word embeddings. Table 3 shows the perplexities achieved by our model on WT2. This dataset is considerably more complex than PTB with a vocabulary of more than 33K words. AWD-LSTM+PDR improves over the single softmax model without PDR by a significant 2.3 points, achieving a perplexity of 63.5. Similar gains are observed with the use of cache pointer (2.4 points) and with the use of dynamic evaluation (1.7 points). Using a mixture-ofsoftmaxes, AWD-LSTM-MoS+PDR achieves perplexities of 60.5 and 40.3 (with dynamic evaluation) on the WT2 test set, improving upon the models without PDR by 1.0 and 0.4 points respectively. Here again, the use of PDR in models with FRAGE could lead to further drops in perplexity. 4.1 Performance on Larger Datasets We consider the Gigaword dataset (Chelba et al., 2014) with a truncated vocabulary of about 100K tokens with the highest frequency and apply PDR to a baseline 2-layer LSTM language model with embedding and hidden dimensions set to 1024. We use all the shards from the training set for training and a few shards from the heldout set for validation (heldout-0,10) and test (heldout20,30,40). We tuned the PDR coefficient coarsely in the vicinity of 0.001. While the baseline model achieved a validation (test) perplexity of 44.3 (43.1), on applying PDR, the model achieved a perplexity of 44.0 (42.5). Thus, PDR is relatively less effective on larger datasets, a fact also observed for other regularization techniques on such datasets (Yang et al., 2017). 5 Results on Character Level Language Modeling The results on PTBC are shown in Table 4. Our method achieves a bits-per-character (BPC) performance of 1.169 on the PTBC test set, improving on the model without PDR by 0.006 or 0.5%. It is notable that even with this highly processed dataset and a small vocabulary of only 51 tokens, our method improves on already highly regularized models. Finally, we present results on Enwik8 in Table 5. AWD-LSTM+PDR achieves 1.245 BPC. This is 0.012 or about 1% less than the 1.257 BPC achieved by AWD-LSTM in our experiments (with hyperparameters from (Merity et al., 2018a). 6 Analysis of PDR In this section, we analyze PDR by probing its performance in several ways and comparing it with models that do not use PDR. 6.1 A Valid Regularization To verify that indeed PDR can act as a form of regularization, we perform the following exper1473 Model #Params Valid Test Sate-of-the-art Methods (Single Softmax) (Merity et al., 2018b) – AWD-LSTM 33.6M 68.6 65.8 (Merity et al., 2018b) – AWD-LSTM + continuous cache pointer 33.6M 53.8 52.0 (Krause et al., 2018) – AWD-LSTM + dynamic evaluation 33.6M 46.4 44.3 (Gong et al., 2018) – AWD-LSTM + cont. cache ptr w/ FRAGE 33.6M 51.0 49.3 Our Method (Single Softmax) AWD-LSTM+PDR 33.6M 66.5 63.5 (-2.3) AWD-LSTM+PDR + continuous cache pointer 33.6M 51.5 49.6 (-2.4) AWD-LSTM+PDR + dynamic evaluation 33.6M 44.6 42.6 (-1.7) Sate-of-the-art Methods (Mixture-of-Softmax) (Yang et al., 2017) – AWD-LSTM-MoS 35M 63.9 61.5 (Yang et al., 2017) – AWD-LSTM-MoS + dynamic evaluation 35M 42.4 40.7 (Gong et al., 2018) – AWD-LSTM-MoS + dyn. 
eval w/ FRAGE 35M 40.9 39.1 Our Method (Mixture-of-Softmax) AWD-LSTM-MoS+PDR 35M 63.0 60.5 (-1.0) AWD-LSTM-MoS+PDR + dynamic evaluation 35M 42.0 40.3 (-0.4) Table 3: Perplexities on WikiText-2 (WT2) for single and mixture-of-softmaxes models. Values in parentheses show gain over respective models without using PDR. The number of parameters during training is 33.8M. We do not use frequency agnostic word embeddings. Model #Prms Test (Krueger et al., 2016) – Zoneout LSTM 1.27 (Chung et al., 2016) – HM-LSTM 1.24 (Ha et al., 2016) – HyperLSTM 14.4M 1.219 (Zoph and Le, 2016) – NAS Cell 16.3M 1.214 (Mujika et al., 2017) – FS-LSTM-4 6.5M 1.193 (Merity et al., 2018a) – AWD-LSTM 13.8M 1.175 Our Method AWD-LSTM+PDR 13.8M 1.169 (-0.006) Table 4: Bits-per-character on the PTBC test set. iment. We take the models for PTB and WT2 and turn off all dropouts and regularization and compare its performance with only PDR turned on. The results, as shown in Table 6, validate the premise of PDR. The model with only PDR turned on achieves 2.4 and 5.1 better validation perplexity on PTB and WT2 as compared to the model without any regularization. Thus, biasing the LSTM by decoding the distribution of past tokens from the predicted next-token distribution can indeed act as a regularizer leading to better generalization performance. Next, in Fig. 1(a) we plot histograms of the negative log-likelihoods of the correct context tokens xt in the past decoded vector wr t computed using Model #Prms Test (Ha et al., 2016) – HyperLSTM 27M 1.340 (Chung et al., 2016) – HM-LSTM 35M 1.32 (Rocki et al., 2016) – SD Zoneout 64M 1.31 (Zilly et al., 2017) – RHN (depth 10) 21M 1.30 (Zilly et al., 2017) – Large RHN 46M 1.270 (Mujika et al., 2017) – FS-LSTM-4 27M 1.277 (Mujika et al., 2017) – Large FS-LSTM-4 47M 1.245 (Merity et al., 2018a) – AWD-LSTM 47M 1.232 Our Method AWD-LSTM (Ours) 47M 1.257 AWD-LSTM+PDR 47M 1.245 Table 5: Bits-per-character on Enwik8 test set. PTB Valid WT2 Valid AWD-LSTM (NoReg) 108.6 142.7 AWD-LSTM (NoReg) + PDR 106.2 137.6 Table 6: Validation perplexities for AWD-LSTM without any regularization and with only PDR. our best models on the PTB and WT2 validation sets. The NLL values are significantly peaked near 0, which means that the past decoding operation is able to decode significant amount of information about the last token in the context. To investigate the effect of hyperparameters on PDR, we pick 60 sets of random hyperparameters 1474 0 2 4 6 8 10 0 0.1 0.2 0.3 Negative log-likelihood Normalized frequency PTB-Valid WT2-Valid (a) Histogram of the NLL of xt in the past decoded vector wr t . 60 60.5 61 61.5 62 0.00 0.10 0.20 0.30 Perplexity Normalized frequency AWD-LSTM+PDR AWD-LSTM (b) Histogram of validation perplexities on PTB for a set of different hyperparameters. Figure 1: Context token NLL for AWD-LSTM+PDR and comparison with AWD-LSTM. 0 2 4 6 8 10 0.00 0.05 0.10 0.15 Predicted token entropy Normalized frequency AWD-LSTM+PDR AWD-LSTM (a) Histogram of entropies of wt+1 for PTB valid. 200 400 600 800 1,000 1,200 30 40 50 60 70 80 No. of epochs Perplexity AWD-LSTM+PDR (Train) AWD-LSTM (Train) AWD-LSTM+PDR (Valid) AWD-LSTM (Valid) (b) Training curves on PTB showing perplexity. The kink in the middle represents the start of finetuning. Figure 2: Comparison between AWD-LSTM+PDR and AWD-LSTM. in the vicinity of those reported by (Merity et al., 2018b) and compute the validation set perplexity after training (without finetuning) on PTB, for both AWD-LSTM+PDR and AWD-LSTM. 
Their histograms are plotted in Fig.1(b). The perplexities for models with PDR are distributed slightly to the left of those without PDR. There appears to be more instances of perplexities in the higher range for models without PDR. Note that there are certainly hyperparameter settings where adding PDR leads to lower validation complexity, as is generally the case for any regularization method. 6.2 Comparison with AWD-LSTM To show the qualitative difference between AWDLSTM+PDR and AWD-LSTM, in Fig.2(a), we plot a histogram of the entropy of the predicted next token distribution wt+1 for all the tokens in the validation set of PTB achieved by their respective best models. The distributions for the two models is slightly different, with some identifiable patterns. The use of PDR has the effect of reducing the entropy of the predicted distribution when it is in the higher range of 8 and above, pushing it into the range of 5-8. This shows that one way PDR biases the language model is by reducing the entropy of the predicted next token distribution. Indeed, one way to reduce the cross-entropy between xt and wr t is by making wt+1 less spread out in Eq.(5). This tends to benefit the language model when the predictions are correct. We also compare the training curves for the two models in Fig.2(b) on PTB. Although the two models use slightly different hyperparame1475 PTB WT2 Model Valid Test Valid Test AWD-LSTM+PDR 57.9 55.6 66.5 63.5 – finetune 60.4 58.0 68.5 65.6 – LSTM output dropout 67.6 65.4 75.4 72.1 – LSTM layer dropout 68.1 65.8 73.7 70.4 – embedding dropout 63.9 61.4 77.1 73.6 – word dropout 62.9 60.5 70.4 67.4 – LSTM weight dropout 68.4 65.8 79.0 75.5 – alpha/beta regularization 63.0 60.4 74.0 70.7 – weight decay 64.7 61.4 72.5 68.9 – PDR 60.5 57.7 69.5 66.4 Table 7: Ablation experiments on the PTB and WT2 validation and test sets. ters, the regularization effect of PDR is apparent with a lower validation perplexity but higher training perplexity. The corresponding trends shown in Fig.2(a,b) for WT2 have similar characteristics. 6.3 Ablation Studies We perform a set of ablation experiments on the best AWD-LSTM+PDR models for PTB and WT2 to understand the relative contribution of PDR and the other regularizations used in the model. The results are shown in Table 7. In both cases, PDR has a significant effect in decreasing the validation set performance, albeit lesser than the other forms of regularization. This is not surprising as PDR does not act on the LSTM weights directly. 7 Related Work Our proposed Past Decode Regularization method builds on the work of using sophisticated regularization techniques to train LSTMs for language modeling. In particular, the AWD-LSTM model achieves state-of-the-art performance with a single softmax on the four datasets considered in this paper (Merity et al., 2018b,a). (Melis et al., 2018) also achieve similar results with highly regularized LSTMs. By addressing the so-called softmax bottleneck in single softmax models, (Yang et al., 2017) use a mixture-of-softmaxes to achieve significantly lower perplexities. PDR utilizes the symmetry between the inputs and outputs of a language model, a fact that is also exploited in weight tying (Inan et al., 2016; Press and Wolf, 2017). Our method can be used with untied weights as well. 
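To make the shape of the regularizer concrete, a minimal sketch of how a past-decoding penalty of this kind could be attached to a weight-tied language model loss is given below. The exact decoding of Eq. (5) is not reproduced in this section, so the particular mapping from the predicted next-token distribution back onto the previous token, as well as all names (pdr_term, pdr_coefficient, past_logits), are assumptions made for illustration rather than the paper's implementation.

import torch
import torch.nn.functional as F

def pdr_term(next_token_logits, context_tokens, embedding, pdr_coefficient=0.001):
    # next_token_logits: (T, V) logits for w_{t+1} at every step t
    # context_tokens:    (T,)   the last context token x_t at every step t
    # embedding:         (V, d) tied input/output embedding matrix
    probs = F.softmax(next_token_logits, dim=-1)          # predicted w_{t+1}
    # Assumed form of the past decoding: map the predicted distribution into
    # embedding space and re-decode it through the tied embeddings to obtain
    # a distribution w_t^r over the vocabulary.
    expected_embedding = probs @ embedding                # (T, d)
    past_logits = expected_embedding @ embedding.t()      # (T, V)
    # Cross-entropy between the past decoded vector and the true context token.
    return pdr_coefficient * F.cross_entropy(past_logits, context_tokens)

def training_loss(next_token_logits, context_tokens, targets, embedding):
    lm_loss = F.cross_entropy(next_token_logits, targets)     # usual LM loss
    return lm_loss + pdr_term(next_token_logits, context_tokens, embedding)

As written, the sketch introduces no new parameters, whereas the models reported above carry slightly under 1% extra parameters during training, so the actual past decoding of Eq. (5) presumably includes a small additional transformation; the default coefficient of 0.001 mirrors the value coarsely tuned for the Gigaword experiment and would need retuning elsewhere.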
Although motivated by language modeling, PDR can also be applied to seq2seq models with shared input-output vocabularies, such as those used for text summarization and neural machine translation (with byte pair encoding of words) (Press and Wolf, 2017). Regularizing the training of an LSTM by combining the main objective function with auxiliary tasks has been successfully applied to several tasks in NLP (Radford et al., 2018; Rei, 2017). In fact, a popular choice for the auxiliary task is language modeling itself. This in turn is related to multi-task learning (Collobert and Weston, 2008). Specialized architectures like Recurrent Highway Networks (Zilly et al., 2017) and NAS (Zoph and Le, 2016) have been successfully used to achieve competitive performance in language modeling. The former makes the hidden-tohidden transition function more complex allowing for more refined information flow. Such architectures are especially important for character level language modeling where strong results have been shown using Fast-Slow RNNs (Mujika et al., 2017), a two level architecture where the slowly changing recurrent network tries to capture more long range dependencies. The use of historical information can greatly help language models deal with long range dependencies as shown by (Merity et al., 2016; Krause et al., 2018; Rae et al., 2018). In a recent paper, (Gong et al., 2018) achieve improved performance for language modeling by using frequency agnostic word embeddings (called FRAGE), a technique that is orthogonal to PDR and can be combined with it. 8 Conclusion We propose a new Past Decode Regularization (PDR) method for language modeling that exploits the input-output symmetry in each step to decode the last token in the context from the predicted next token distribution. We empirically show reductions in perplexity on several benchmark datasets as compared to strong highly reg1476 ularized baseline models. Future work includes exploring the application of PDR to other seq2seq models that have a similar input-output symmetry. Also, it will be worthwhile to ascertain the efficacy of PDR for language models using Transformers and in combination with FRAGE embeddings. References Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, and Phillipp Koehn. 2014. One billion word benchmark for measuring progress in statistical language modeling. In INTERSPEECH. Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. 2016. Hierarchical multiscale recurrent neural networks. CoRR, abs/1609.01704. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In ICML. Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In NIPS. ChengYue Gong, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2018. Frage: Frequency-agnostic word representation. CoRR, abs/1809.06858. Edouard Grave, Armand Joulin, and Nicolas Usunier. 2016. Improving neural language models with a continuous cache. CoRR, abs/1612.04426. David Ha, Andrew M. Dai, and Quoc V. Le. 2016. Hypernetworks. CoRR, abs/1609.09106. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. M. Hutter. 2018. The human knowledge compression contest. Hakan Inan, Khashayar Khosravi, and Richard Socher. 2016. Tying word vectors and word classifiers: A loss framework for language modeling. CoRR, abs/1611.01462. 
Ben Krause, Emmanuel Kahembwe, Iain Murray, and Steve Renals. 2018. Dynamic evaluation of neural sequence models. In ICML. David Krueger, Tegan Maharaj, J´anos Kram´ar, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron C. Courville, and Christopher Joseph Pal. 2016. Zoneout: Regularizing rnns by randomly preserving hidden activations. CoRR, abs/1606.01305. G´abor Melis, Chris Dyer, and Phil Blunsom. 2018. On the state of the art of evaluation in neural language models. In ICLR. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018a. An analysis of neural language modeling at multiple scales. CoRR, abs/1803.08240. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018b. Regularizing and optimizing LSTM language models. In ICLR. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. CoRR, abs/1609.07843. Tomas Mikolov, Martin Karafi´at, Luk´as Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH. Asier Mujika, Florian Meier, and Angelika Steger. 2017. Fast-slow recurrent neural networks. In NIPS. Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In EACL. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Jack W. Rae, Chris Dyer, Peter Dayan, and Timothy P. Lillicrap. 2018. Fast parametric learning with activation memorization. In ICML. Marek Rei. 2017. Semi-supervised multitask learning for sequence labeling. In ACL. Kamil Rocki, Tomasz Kornuta, and Tegan Maharaj. 2016. Surprisal-driven zoneout. CoRR, abs/1610.07675. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Li Wan, Matthew D. Zeiler, Sixin Zhang, Yann LeCun, and Rob Fergus. 2013. Regularization of neural networks using dropconnect. In ICML. Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. 2017. Breaking the softmax bottleneck: A high-rank rnn language model. CoRR, abs/1711.03953. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. CoRR, abs/1409.2329. Julian G. Zilly, Rupesh Kumar Srivastava, Jan Koutn´ık, and J¨urgen Schmidhuber. 2017. Recurrent highway networks. In ICML. Barret Zoph and Quoc V. Le. 2016. Neural architecture search with reinforcement learning. CoRR, abs/1611.01578.
2019
142
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1477–1482 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1477 Training Hybrid Language Models by Marginalizing over Segmentations Edouard Grave Sainbayar Sukhbaatar Piotr Bojanowski Armand Joulin Facebook AI Research {egrave,sainbar,bojanowski,ajoulin}@fb.com Abstract In this paper, we study the problem of hybrid language modeling, that is using models which can predict both characters and larger units such as character ngrams or words. Using such models, multiple potential segmentations usually exist for a given string, for example one using words and one using characters only. Thus, the probability of a string is the sum of the probabilities of all the possible segmentations. Here, we show how it is possible to marginalize over the segmentations efficiently, in order to compute the true probability of a sequence. We apply our technique on three datasets, comprising seven languages, showing improvements over a strong character level language model. 1 Introduction Statistical language modeling is the problem of estimating a probability distribution over text data (Bahl et al., 1983). Most approaches formulate this problem at the word level, by first segmenting the text using a fixed vocabulary. A limitation of these methods is that they cannot generate new words, or process out of vocabulary words. A popular alternative is to directly model sequences at the character level. These models can potentially generate any sequence, and are thus sometimes referred to as open vocabulary. However, they tend to underperform compared to word level models when trained on the same data. For these reasons, a few works have proposed hybrid models, that work both at the character and word level (or sometimes groups of characters). A first class of hybrid models switch between word and character level representations, depending on whether they predict that the upcoming word is in the vocabulary or not (Kawakami et al., 2017; Mielke and Eisner, 2019). For example, a first model can be trained on tokenized data, where outof-vocabulary words are replaced by the <unk> token. A second model is then used to generate the character sequences corresponding to out-ofvocabulary words. Another approach, which does not require tokenization, is to process groups of characters, which are obtained based on linguistic knowledge or low level statistics. These include merging characters using mutual information (Mikolov et al., 2012) or the byte pair encoding algorithm (Sennrich et al., 2016). This approach first produces a segmentation for the text, and then learns a language model on it. However, some sequences have multiple possible segmentations, and a model considering a single one might underestimate the true probability of the sequence. Thus, it is important to marginalize over the set of segmentations to obtain the true probability of a sequence (van Merri¨enboer et al., 2017; Buckman and Neubig, 2018). In this paper, we propose an alternative approach to address this limitation, and in particular, to train models by marginalizing over the set of segmentations. As the number of possible segmentations grows exponentially with the sequence size, using an efficient algorithm such as dynamic programming is important. Computing the representation of the context at the character level allows to apply dynamic programming to this problem, without using approximations. 
This technique was previously considered in the context of automatic speech recognition (Wang et al., 2017) or to copy tokens from the input for code generation (Ling et al., 2016). We evaluate our method on three datasets for character level language modeling, showing that adding n-grams to the predictions improve the perplexity of the model. 1478 2 Approach The goal of character level language modeling is to learn a probability distribution over sequences of characters c1, ..., cT . Using the chain rule, such a distribution can be factorized as the product of the probability distribution of a character conditioned on its history: p(c1, ..., cT ) = T Y t=1 p(ct | c0, ..., ct−1), where c0 is a special symbol indicating the beginning of the sequence. In this paper, we learn these conditional probability distributions using neural networks. For each time step t, a neural network builds a representation ht from the history that is used to predict the upcoming character. This representation can be obtained from any architecture, such as feedforward (Bengio et al., 2003) or recurrent networks (Mikolov et al., 2010). We focus on the transformer network, recently introduced by Vaswani et al. (2017), because of its high performance on character level language modeling (Dai et al., 2018). We refer to Vaswani et al. (2017) for the details of this architecture. 2.1 Hybrid language models Hybrid language models predict multiple tokens, instead of one, at each time step. One way to perform this is to add n-grams to the output vocabulary of the model. Under such models, a character sequence has multiple segmentations, and the model estimates its probability by summing the probability of all its segmentations. For example, if the model predicts bigrams in addition to characters, the word dog can be decomposed as [d], [o], [g] or [do], [g] or [d], [og]. Thus, the probability of the sequence of characters dog is given by p(dog) = p(d) × p(o | d) × p(g | do) + p(do) × p(g | do) + p(d) × p(og | d). More formally, let us denote by S(c1:T ) the set of all possible segmentations of a given sequence c1:T = c1, ..., cT . Then, the probability of the character sequence is p(c1:T ) = p(c1, ..., cT ) = X s∈S(c) p(s). (1) The set of all possible segmentations grows exponentially with the sequence size, making it impractical to evaluate this probability by directly summing over all segmentations. 2.2 Factorization of the segmentation probabilities A segmentation s can be decomposed into a sequence s1, ..., sK of consecutive atoms in the vocabulary on which we apply the chain rule to get: p(s) = K Y k=1 p(sk | s0, ..., sk−1). Using this factorization of the probability distribution, it is not possible to directly apply dynamic programming to compute the probability of a sequence. The reason is that the conditional distribution of symbols depends on the segmentation, preventing to reuse computation across different segmentations. For example, previous work proposed to use the segmentation both in the input and output of the model. The hidden representations ht of the neural network were thus intrinsically linked to the segmentation, preventing to share computations. A potential workaround is to merge the different representations corresponding to all the segmentations ending at the same character, for example by avergaging them (van Merri¨enboer et al., 2017; Buckman and Neubig, 2018). 
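To make concrete why the direct evaluation of Eq. (1) is impractical, the short sketch below enumerates every segmentation of a character string into unigrams and bigrams and sums their probabilities. The conditional probability function it takes as input is a placeholder for the model of Section 2.4, and the enumeration is exponential in the sequence length, so it is only usable on toy strings.

def segmentations(text, max_len=2):
    # Yield every split of `text` into symbols of length 1..max_len.
    if not text:
        yield []
        return
    for n in range(1, max_len + 1):
        if len(text) >= n:
            for rest in segmentations(text[n:], max_len):
                yield [text[:n]] + rest

def sequence_probability(text, cond_prob, max_len=2):
    # cond_prob(symbol, history) -> p(symbol | history), where `history` is the
    # character prefix already generated; it stands in for the softmax of Sec. 2.4.
    total = 0.0
    for segmentation in segmentations(text, max_len):
        p, history = 1.0, ""
        for symbol in segmentation:
            p *= cond_prob(symbol, history)
            history += symbol
        total += p
    return total

With unigrams and bigrams the word "dog" already has the three segmentations listed in Section 2.1, and the count grows exponentially in the length of the string; the remainder of this section removes the obstacle that prevents sharing computation across segmentations, after which the same sum can be computed exactly in linear time.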
In our case, we use n-grams only in the output, making the representations ht independent of the segmentations, and the application of dynamic programming straightforward. To do so, we define the conditional distribution using characters, instead of the segmentation. Given a sequence s1, ..., sK of n-grams, we introduce the concatenation operator concat, such that concat(s1, ..., sK) = c1, ..., cJ corresponds to the sequence of J characters that compose the segmentation sequence. For example, the two segmentations [do], [g], [s] and [d], [og], [s] of the word dogs share the same output from the concat operator: concat([do], [g], [s]) = d, o, g, s, concat([d], [og], [s]) = d, o, g, s. We now define the conditional distribution as p(sk | s1:k−1) = p(sk | concat(s1:k−1)). (2) 1479 This reformulation is exact under the conditional independence assumption, i.e., that the symbol at position t in the character sequence is independent of the segmentation, given the characters up to time t−1. In the next section, we show how, under this assumption, the probability of a sequence can be computed with dynamic programming. 2.3 Dynamic programming For this section, we restrict ourselves to predicting characters and bigrams for simplicity. However, our approach is straightforward to apply to n-grams or words. Given a sequence of character c = c1, ..., cT , all segmentations end with either the character cT or the bigram cT−1cT . More precisely, we can decompose the probability of c as: p(c1:T ) = X s∈S(c1:T −1) p(cT | s)p(s) + X s∈S(c1:T −2) p(cT−1cT | s)p(s). Using the reformulation of the conditional probability of Eq. (2) under the conditional independence assumption on segmentations, we get p(c1:T ) = X s∈S(c1:T −1) p(cT | c1:T−1)p(s) + X s∈S(c1:T −2) p(cT−1cT | c1:T−2)p(s). We now move the conditional probabilities out of the sums: p(c1:T ) = p(cT | c1:T−1) X s∈S(c1:T −1) p(s) + p(cT−1cT | c1:T−2) X s∈S(c1:T −2) p(s). Finally, using Eq. (1), we obtain a recurrence relation over the sequence probabilities: p(c1:T ) = p(cT | c1:T−1)p(c1:T−1) + p(cT−1cT | c1:T−2)p(c1:T−2). We can thus optimize over all the possible segmentations using dynamic programing. 2.4 Conditional distribution of symbols In this section, we briefly describe how to model the conditional probability distribution of symbols, either characters or ngrams, given the character history. We learn a character level neural network to encode the context with hidden representation ht for each character t. The probability distribution of the next symbol, either a character or a n-gram, is obtained by taking the softmax over the full vocabulary, which includes both characters and longer elements: p(· | c0, ..., ct−1) = softmax(Wht). Note that we get only one probability distribution over n-grams of different lengths. 2.5 Training procedure We learn the parameters of our model by minimizing the negative log-likelihood of the training data, using the probability introduced in Eq. (1). We rely on automatic differentiation to compute the gradients, and thus, only need to implement the forward computation, which relies on dynamic programming. Empirically, we observed that training a model from scratch with this objective is sometimes unstable. We thus consider an alternative training objective, used at the beginning of training. For each position, this loss is equal to the sum of the negative log-probabilities of the n-grams corresponding to that position. 
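Before formalizing that auxiliary loss, it is worth noting that the main objective, the recurrence derived in Section 2.3, reduces to a one-dimensional dynamic program over prefix probabilities. A log-space sketch for the character-plus-bigram case is given below; the tensor layout is an assumption, and out-of-vocabulary bigrams can simply be assigned log-probability -inf, matching the zero-probability convention described later for the experiments.

import torch

def sequence_log_prob(char_logp, bigram_logp):
    # char_logp[t]   = log p(c_{t+1}         | c_{1:t})  -- shape (T,)
    # bigram_logp[t] = log p(c_{t+1} c_{t+2} | c_{1:t})  -- shape (T-1,)
    # Out-of-vocabulary bigrams can be given log-probability -inf.
    # Returns log p(c_{1:T}) marginalized over all segmentations, Eq. (1).
    T = char_logp.shape[0]
    alpha = [torch.tensor(0.0), char_logp[0]]   # alpha[t] = log p(c_{1:t})
    for t in range(2, T + 1):
        ends_with_char = alpha[t - 1] + char_logp[t - 1]
        ends_with_bigram = alpha[t - 2] + bigram_logp[t - 2]
        alpha.append(torch.logaddexp(ends_with_char, ends_with_bigram))
    return alpha[T]

# Training would minimize -sequence_log_prob(...); since only the forward
# recursion is implemented, automatic differentiation supplies the gradients.

Longer n-grams only add further terms inside the log-sum-exp, one per admissible symbol length. The warm-up loss mentioned just above needs no such recursion; it is a plain per-position sum of n-gram log-losses, made precise in the expression that follows.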
More formally, given a sequence of length T, this objective is equal to − T X t=1 N−1 X n=1 log (p(ct:t+n | c1:t−1)) , and N is the size of the longest n-grams considered (we can pad n-grams when t + n > T or exclude them from this loss). 3 Experiments In this section, we describe the experiments that we performed to evaluate our approach on character level language modeling. 3.1 Datasets We consider 3 datasets derived from Wikipedia articles, but with different preprocessing. Text8. The text8 dataset of M. Mahoney1 contains 100 million characters from Wikipedia, and was preprocessed to only contains the lowercase letters a-z and nonconsecutive spaces. 1http://mattmahoney.net/dc/textdata 1480 Model Cs De En Es Fi Fr Ru Avg. HCLM (Kawakami et al., 2017) 2.035 1.641 1.622 1.555 1.796 1.508 1.810 1.710 HCLM cache (Kawakami et al., 2017) 1.984 1.588 1.538 1.498 1.711 1.467 1.761 1.649 Full (Mielke and Eisner, 2019) 2.240 1.618 1.506 1.469 1.896 1.434 1.969 1.733 Full (tok) (Mielke and Eisner, 2019) 1.928 1.465 1.387 1.363 1.751 1.319 1.709 1.560 BPE (Mielke and Eisner, 2019) 1.897 1.455 1.439 1.403 1.685 1.365 1.643 1.555 BPE (tok) (Mielke and Eisner, 2019) 1.856 1.414 1.386 1.362 1.652 1.317 1.598 1.512 Transformer baseline 1.777 1.406 1.393 1.37 1.525 1.34 1.616 1.489 Our approach 1.715 1.352 1.341 1.326 1.445 1.299 1.508 1.426 Table 1: Test set bpc on the MWC dataset. The hyperparameters for our method are chosen on the validation set of WikiText2. Note that Mielke and Eisner (2019) applied the BPE baseline and their method to both tokenized and non-tokenized data. All the other methods were applied on non-tokenized data only. Model Test BN LSTM (Cooijmans et al., 2016) 1.36 HM LSTM (Chung et al., 2016) 1.29 RHN (Zilly et al., 2017) 1.27 Large mLSTM (Krause et al., 2016) 1.27 12L Transf. (Al-Rfou et al., 2018) 1.18 Transformer baseline 1.176 Our approach 1.156 Table 2: Test set bpc on the text8 dataset. WikiText2. The WikiText2 dataset was introduced by Merity et al. (2017) with a different preprocessing from text8: numbers, capital letters and special characters are kept. The vocabulary size is 1152.2 We use the raw version of the dataset, which is tokenized but where rare words are not replaced by the <unk> token. The training data contains 10.9 millions characters. MWC. The multilingual Wikipedia corpus (MWC) of Kawakami et al. (2017) is very similar in size and preprocessing as WikiText2, but contains documents in 7 languages: Czech (cs), German (de), English (en), Spanish (es), Finnish (fi), French (fr) and Russian (ru). Unlike Wikitext2, the MWC dataset is not tokenized. The training sets range from 6.1M characters for Czech to 15.6M characters for English, and we refer the reader to Kawakami et al. (2017) for detailed statistics on this corpus.3 2As opposed to previous work, we keep all characters that appears in the train, validation or test splits of the data. 3Again, we keep all characters that appears in the data. Model Test HCLM (Kawakami et al., 2017) 1.670 HCLM cache (Kawakami et al., 2017) 1.500 BPE (Mielke and Eisner, 2019) 1.468 Full (Mielke and Eisner, 2019) 1.455 Transformer baseline 1.417 Our approach 1.366 Table 3: Test set bpc on the WikiText2 dataset. 3.2 Technical details Following recent work on character language modeling with transformers, we use a model with 12 layers of dimension 512, and 4 attention heads. We use a feedforward block of dimension 2048 for MWC and WikiText2, and 3072 for text8. We set the attention length to 512, and the batch size to 8. 
We use Adagrad (Duchi et al., 2011) to learn the parameters of our models. Following Vaswani et al. (2017), we start with a learning rate of 0, increase it linearly for k timesteps, then keep it constant, before halving it at every epochs for the last 10 epochs. We use a learning rate of 0.04 and warmup of 16k steps for the text8 dataset, and a learning rate of 0.025 and warmup of 8k steps for the WikiText2 and MWC datasets. In order to have an efficient model at inference time, we use the caching mechanism from Dai et al. (2018) to store the hidden representations of the previous batch, as well as relative position weights. We pick a dropout rate in the set {0.1, 0.2, 0.3, 0.4, 0.5}, using the validation set. In the experiments, we use n-grams of size up to 4, excluding n-grams appearing less than 200 times (1000 times on text8) to limit the 1481 size of the vocabulary. Thus, segmentations which contain out-of-vocabulary n-grams have a probability equal to zero. 3.3 Results In Table 1, we report results on the MWC dataset, comparing our approach to the models of Kawakami et al. (2017) and Mielke and Eisner (2019). Our approach significantly improves the state of the art on this dataset. Some of the gain is due to the change of architecture for a transformer. However, we observe that marginalizing over segmentations also improves over the character level transformer baseline, showing the benefits of our method. Finally, as opposed to Mielke and Eisner (2019), our approach does not need to tokenize the data to perform well on this dataset. In Table 2 and Table 3, we report results on the text8 and wikitext2 datasets respectively. As for the MWC dataset, our approach significantly improves the perplexity compared to our character level transformer baseline. Note that the state of the art on text8 is 1.08 bpc on the test set with a 24-layer transformer network (Dai et al., 2018). This model is significantly larger than ours, containing almost 8 times more parameters. 4 Conclusion In this paper, we study the problem of hybrid language modeling, where models can predict ngrams, instead of unigrams only. A technical challenge for learning these models is that a given string can have multiple segmentations, and one needs to marginalize over the set of segmentations. We introduce a simple technique to do so, allowing to apply dynamic programming for learning and inference. Using this approach, we improve the state of the art on the MWC and WikiText2 datasets, used to evaluate hybrid language models. Acknowledgements We thank the anonymous reviewers for their helpful comments. References Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. 2018. Character-level language modeling with deeper self-attention. arXiv preprint arXiv:1808.04444. Lalit R Bahl, Frederick Jelinek, and Robert L Mercer. 1983. A maximum likelihood approach to continuous speech recognition. IEEE Transactions on Pattern Analysis & Machine Intelligence, (2):179–190. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. JMLR. Jacob Buckman and Graham Neubig. 2018. Neural lattice language models. TACL. Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. 2016. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704. Tim Cooijmans, Nicolas Ballas, C´esar Laurent, C¸ a˘glar G¨ulc¸ehre, and Aaron Courville. 2016. Recurrent batch normalization. arXiv preprint arXiv:1603.09025. 
Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2018. Transformer-xl: Language modeling with longer-term dependency. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. JMLR. Kazuya Kawakami, Chris Dyer, and Phil Blunsom. 2017. Learning to create and reuse words in openvocabulary neural language modeling. In ACL. Ben Krause, Liang Lu, Iain Murray, and Steve Renals. 2016. Multiplicative lstm for sequence modelling. arXiv preprint arXiv:1609.07959. Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk`y, Andrew Senior, Fumin Wang, and Phil Blunsom. 2016. Latent predictor networks for code generation. arXiv preprint arXiv:1603.06744. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In ICLR. Bart van Merri¨enboer, Amartya Sanyal, Hugo Larochelle, and Yoshua Bengio. 2017. Multiscale sequence modeling with a learned dictionary. arXiv preprint arXiv:1707.00762. Sebastian J Mielke and Jason Eisner. 2019. Spell once, summon anywhere: A two-level open-vocabulary language model. In AAAI. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh annual conference of the international speech communication association. Tom´aˇs Mikolov, Ilya Sutskever, Anoop Deoras, HaiSon Le, Stefan Kombrink, and Jan Cernocky. 2012. Subword language modeling with neural networks. preprint (http://www. fit. vutbr. cz/imikolov/rnnlm/char. pdf), 8. 1482 Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Chong Wang, Yining Wang, Po-Sen Huang, Abdelrahman Mohamed, Dengyong Zhou, and Li Deng. 2017. Sequence modeling via segmentations. In ICML. Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutn´ık, and J¨urgen Schmidhuber. 2017. Recurrent highway networks. In ICML.
2019
143
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1483–1493 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1483 Improving Neural Language Models by Segmenting, Attending, and Predicting the Future Hongyin Luo1 Lan Jiang2 Yonatan Belinkov1 James Glass1 1MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA 02139, USA {hyluo, belinkov, glass}@mit.edu 2School of Information Sciences, University of Illinois at Urbana–Champaign Champaign, IL 61820, USA [email protected] Abstract Common language models typically predict the next word given the context. In this work, we propose a method that improves language modeling by learning to align the given context and the following phrase. The model does not require any linguistic annotation of phrase segmentation. Instead, we define syntactic heights and phrase segmentation rules, enabling the model to automatically induce phrases, recognize their task-specific heads, and generate phrase embeddings in an unsupervised learning manner. Our method can easily be applied to language models with different network architectures since an independent module is used for phrase induction and context-phrase alignment, and no change is required in the underlying language modeling network. Experiments have shown that our model outperformed several strong baseline models on different data sets. We achieved a new state-of-the-art performance of 17.4 perplexity on the Wikitext-103 dataset. Additionally, visualizing the outputs of the phrase induction module showed that our model is able to learn approximate phrase-level structural knowledge without any annotation. 1 Introduction Neural language models are typically trained by predicting the next word given a past context (Bengio et al., 2003). However, natural sentences are not constructed as simple linear word sequences, as they usually contain complex syntactic information. For example, a subsequence of words can constitute a phrase, and two non-neighboring words can depend on each other. These properties make natural sentences more complex than simple linear sequences. Most recent work on neural language modeling learns a model by encoding contexts and matching the context embeddings to the embedding of the next word (Bengio et al., 2003; Merity et al., 2017; Melis et al., 2017). In this line of work, a given context is encoded with a neural network, for example a long short-term memory (LSTM; Hochreiter and Schmidhuber, 1997) network, and is represented with a distributed vector. The loglikelihood of predicting a word is computed by calculating the inner product between the word embedding and the context embedding. Although most models do not explicitly consider syntax, they still achieve state-of-the-art performance on different corpora. Efforts have also been made to utilize structural information to learn better language models. For instance, parsing-readingpredict networks (PRPN; Shen et al., 2017) explicitly learn a constituent parsing structure of a sentence and predict the next word considering the internal structure of the given context with an attention mechanism. Experiments have shown that the model is able to capture some syntactic information. Similar to word representation learning models that learns to match word-to-word relation matrices (Mikolov et al., 2013; Pennington et al., 2014), standard language models are trained to factorize context-to-word relation matrices (Yang et al., 2017). 
In such work, the context comprises all previous words observed by a model for predicting the next word. However, we believe that contextto-word relation matrices are not sufficient for describing how natural sentences are constructed. We argue that natural sentences are generated at a higher level before being decoded to words. Hence a language model should be able to predict the following sequence of words given a context. In this work, we propose a model that factorizes a context-to-phrase mutual information matrix to learn better language models. The contextto-phrase mutual information matrix describes the relation among contexts and the probabilities of 1484 phrases following given contexts. We make the following contributions in this paper: • We propose a phrase prediction model that improves the performance of state-of-the-art word-level language models. • Our model learns to predict approximate phrases and headwords without any annotation. 2 Related Work Neural networks have been widely applied in natural language modeling and generation (Bengio et al., 2003; Bahdanau et al., 2014) for both encoding and decoding. Among different neural architectures, the most popular models are recurrent neural networks (RNNs; Mikolov et al., 2010), long short-term memory networks (LSTMs; Hochreiter and Schmidhuber, 1997), and convolutional neural networks (CNNs; Bai et al., 2018; Dauphin et al., 2017). Many modifications of network structures have been made based on these architectures. LSTMs with self-attention can improve the performance of language modeling (Tran et al., 2016; Cheng et al., 2016). As an extension of simple self-attention, transformers (Vaswani et al., 2017) apply multihead self-attention and have achieved competitive performance compared with recurrent neural language models. A current state-of-the-art model, Transformer-XL (Dai et al., 2018), applied both a recurrent architecture and a multi-head attention mechanism. To improve the quality of input word embeddings, character-level information is also considered (Kim et al., 2016). It has also been shown that context encoders can learn syntactic information (Shen et al., 2017). However, instead of introducing architectural changes, for example a self-attention mechanism or character-level information, previous studies have shown that careful hyper-parameter tuning and regularization techniques on standard LSTM language models can obtain significant improvements (Melis et al., 2017; Merity et al., 2017). Similarly, applying more careful dropout strategies can also improve the language models (Gal and Ghahramani, 2016; Melis et al., 2018). LSTM language models can be improved with these approaches because LSTMs suffer from serious over-fitting problems. Recently, researchers have also attempted to improve language models at the decoding phase. Inan et al. (2016) showed that reusing the input word embeddings in the decoder can reduce the perplexity of language models. Yang et al. (2017) showed the low-rank issue in factorizing the context-to-word mutual information matrix and proposed a multi-head softmax decoder to solve the problem. Instead of predicting the next word by using only similarities between contexts and words, the neural cache model (Grave et al., 2016) can significantly improve language modeling by considering the global word distributions conditioned on the same contexts in other parts of the corpus. To learn the grammar and syntax in natural languages, Dyer et al. 
(2016) proposed the recurrent neural network grammar (RNNG) that models language incorporating a transition parsing model. Syntax annotations are required in this model. To utilize the constituent structures in language modeling without syntax annotation, parse-readpredict networks (PRPNs; Shen et al., 2017) calculate syntactic distances among words and computes self-attentions. Syntactic distances have been proved effective in constituent parsing tasks (Shen et al., 2018a). In this work, we learn phrase segmentation with a model based on this method and our model does not require syntax annotation. 3 Syntactic Height and Phrase Induction In this work, we propose a language model that not only predicts the next word of a given context, but also attempts to match the embedding of the next phrase. The first step of this approach is conducting phrase induction based on syntactic heights. In this section, we explain the definition of syntactic height in our approach and describe the basics ideas about whether a word can be included in an induced phrase. Intuitively, the syntactic height of a word aims to capture its distance to the root node in a dependency tree. In Figure 1, the syntactic heights are represented by the red bars. A word has high syntactic height if it has low distance to the root node. A similar idea, named syntactic distance, is proposed by Shen et al. (2017) for constructing constituent parsing trees. We apply the method for calculating syntactic distance to calculate syntactic height. Given a sequence of embeddings of input words [x1, x2, · · · , xn], we calculate their syntactic heights with a temporal convolutional net1485 work (TCN) (Bai et al., 2018). di = Wd · [xi−n, xi−n+1, · · · , xi]T + bd (1) hi = Wh · ReLU(di) + bh (2) where hi stands for the syntactic height of word xi. The syntactic height hi for each word is a scalar, and Wh is a 1 × D matrix, where D is the dimensionality of di. These heights are learned and not imposed by external syntactic supervision. In Shen et al. (2017), the syntactic heights are used to generate context embeddings. In our work, we use the syntactic heights to predict induced phrases and calculate their embeddings. We define the phrase induced by a word based on the syntactic heights. Consider two words xi and xk. xk belongs to the phrase induced by xi if and only if for any j ∈(i, k), hj < max(hi, hk). For example, in Figure 1, the phrase induced by the red marked word the is “the morning flights”, since the syntactic height of the word morning, hmorning < hflights. However, the word “to” does not belong to the phrase because hflights is higher than both hthe and hto. The induced phrase and the inducing dependency connection are labeled in blue in the figure. Note that this definition of an induced phrase does not necessarily correspond to a phrase in the syntactic constituency sense. For instance, the words “to Houston” would be included in the phrase “the morning flights to Houston” in a traditional syntactic tree. Given the definition of induced phrases, we propose phrase segmenting conditions (PSCs) to find the last word of an induced phrase. Considering the induced phrase of the i-th word, si = [xi, xi+1, · · · , xj]. If xj is not the last word of a given sentence, there are two conditions that xj should satisfy: 1. (PSC-1) The syntactic height of xj must be higher than the height of xi, that is hj −hi > 0 (3) 2. (PSC-2) The syntactic height of xj+1 should be lower that xj. 
hj −hj+1 > 0 (4) Given the PSCs, we can decide the induced phrases for the sentence shown in Figure 1. The last word of the phrase induced by “United” is United canceled the morning flights to Houston root Figure 1: Groundtruth dependency tree and syntactic heights of each word. “canceled”, and the last word of the phrase induced by “flights” is “Houston”. For the word assigned the highest syntactic height, its induced phrase is all remaining words in the sentence. 4 Model In this work, we formulate multi-layer neural language models as a two-part framework. For example, in a two-layer LSTM language model (Merity et al., 2017), we use the first layer as phrase generator and the last layer as a word generator: [c1, c2, · · · , cT ] = RNN1([x1, x2, · · · , xT ]) (5) [y1, y2, · · · , yT ] = RNN2([c1, c2, · · · , cT ]) (6) For a L-layer network, we can regard the first L1 layers as the phrase generator and the next L2 = L −L1 layers as the word generator. Note that we use yi to represent the hidden state output by the second layer instead of hi, since hi in our work is defined as the syntactic height of xi. In the traditional setting, the first layer does not explicitly learn the semantics of the following phrase because there is no extra objective function for phrase learning. In this work, we force the first layer to output context embeddings ci for phrase prediction with three steps. Firstly, we predict the induced phrase for each word according to the PSCs proposed in Section 3. Secondly, we calculate the embedding of each phrase with a head-finding attention. Lastly, we align the context embedding and phrase embedding with negative sampling. The word generation is trained in the same way as standard language models. The diagram of the model is shown in Figure 2. The three steps are described next. 1486 United canceled the morning flights to Houston Phrase Generator Step 1. Syntactic height and phrase induction morning flights Step2. Phrase embedding with headword attention Word Generator Context-phrase alignment next-word embedding: morning Context-word alignment Objective Function Step 3. Phrase and word prediction Phrase Embedding: morning flights Figure 2: The 3-step diagram of our approach. The current target word is “the”, the induced phrase is “morning flights”, and the next word is “morning”. The context-phrase and context-word alignments are jointly trained. 4.1 Phrase Segmentation We calculate the syntactic height and predict the induced phrase for each word: hi = TCN([xi−n, xi−n+1, · · · , xi]) (7) where TCN(·) stands for the TCN model described in Equations (1) and (2), and n is the width of the convolution window. Based on the proposed phrase segmenting conditions (PSCs) described in the previous section, we predict the probability of a word being the first word outside a induced phrase. Firstly, we decide if each word, xj−1, j ∈(i + 1, n], satisfies the two phrase segmenting conditions, PSC-1 and PSC-2. The probability that xj satisfies PSC-1 is p1 psc(xj) = 1 2 · (fHT (hj −hi) + 1) (8) Similarly, the probability that xj satisfies PSC-2 is p2 psc(xj) = 1 2 · (fHT (hj −hj+1) + 1) (9) where fHT stands for the HardTanh function with a temperature a: fHT (x) =      −1 x ≤−1 a a · x −1 a < x ≤1 a 1 x > 1 a This approach is inspired by the context attention method proposed in the PRPN model (Shen et al., 2017). 
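A minimal sketch of Eqs. (8) and (9) is given below, assuming the syntactic heights have already been produced by the TCN of Eq. (7); the temperature value, the tensor layout, and the handling of the sentence-final word are illustrative choices rather than details taken from the paper.

import torch

def hard_tanh(x, a):
    # f_HT(x) = clamp(a * x, -1, 1), matching the piecewise definition above.
    return torch.clamp(a * x, min=-1.0, max=1.0)

def psc_probs(heights, i, a=1.0):
    # heights: (n,) syntactic heights h_1..h_n of one sentence
    # i:       index of the target word inducing the phrase
    # Returns, for every candidate end-word x_j with j > i, the probabilities
    # of satisfying PSC-1 and PSC-2.
    h = heights
    cand = h[i + 1:]
    if cand.numel() == 0:                 # target word is sentence-final
        return cand, cand
    p1 = 0.5 * (hard_tanh(cand - h[i], a) + 1.0)            # Eq. (8)
    # PSC-2 compares h_j with h_{j+1}; the last word has no successor, so we
    # pad with its own height, an arbitrary choice that yields probability 0.5.
    h_next = torch.cat([h[i + 2:], h[-1:]])
    p2 = 0.5 * (hard_tanh(cand - h_next, a) + 1.0)          # Eq. (9)
    return p1, p2

These two per-word probabilities are the only ingredients needed for the membership probability and the headword attention developed in the rest of this section.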
Then we can infer the probability of whether a word belongs to the induced phrase of xi with pind(xj) = jY k=1 ˆp(xk) (10) where pind(xi) stands for the probability that xi belongs to the induced phrase, and ˆp(xk)=  1 k ≤i + 1 1 −p1 psc(xk−1) · p2 psc(xk−1) k > i + 1 Note that the factorization in Equation 10 assumes that words are independently likely to be included in the induced phrase of xi. 4.2 Phrase Embedding with Attention Given induced phrases, we can calculate their embeddings based on syntactic heights. To calculate the embedding of phrase s = [x1, x2, · · · , xn], we calculate an attention distribution over the phrase: αi = hi · pind(xi) + c P j hj · pind(xj) + c (11) where hi stands for the syntactic height for word xi and c is a constant real number for smoothing the attention distribution. Then we generate the phrase embedding with a linear transformation: s = W · X i αi · ei (12) where ei is the word embedding of xi. In training, we apply a dropout layer on s. 4.3 Phrase and Word Prediction A traditional language model learns the probability of a sequence of words: p(x1, x2, · · · , xn) = p(x1) · Y i p(xi+1|xi 1) (13) where xi 1 stands for x1, x2, · · · , xi, which is the context used for predicting the next word, xi+1. In most related studies, the probability p(xi+1|xi 1) 1487 is calculated with the output of the top layer of a neural network yi and the word representations ei+1 learned by the decoding layer: p(xi+1) = Softmax(eT i+1 · yi) (14) The state-of-the-art neural language models contain multiple layers. The outputs of different hidden layers capture different level of semantics of the context. In this work, we force one of the hidden layers to align its output with the embeddings of induced phrases si. We apply an embedding model similar to Mikolov et al. (2013) to train the hidden output and phrase embedding alignment. We define the context-phrase alignment model as follows. We first define the probability that a phrase phi can be induced by context [x1, . . . , xi]. p(phi|xi 1) = σ(cT i · si) (15) where σ(x) = 1 1+e−x , and ci stands for the context embedding of x1, x2, · · · , xi output by a hidden layer, defined in Equation 5. si is the generated embedding of an induced phrase. The probability that a phrase phi cannot be induced by context [x1, . . . , xi] is 1 −p(phi|xi 1). This approach follows the method for learning word embeddings proposed in Mikolov et al. (2013). We use an extra objective function and the negative sampling strategy to align context representations and the embeddings of induced phrases. Given the context embedding ci, the induced phrase embedding si, and random sampled negative phrase embeddings sneg i , we train the neural network to maximize the likelihood of true induced phrases and minimize the likelihood of negative samples. we define the following objective function for context i: lCPA i = 1 −σ(cT i · si) + 1 n n X j=1 σ(cT i · sneg j ) (16) where n stands for the number of negative samples. With this loss function, the model learns to maximize the similarity between the context and true induced phrase embeddings, and minimize the similarity between the context and negative samples randomly selected from the induced phrases of other words. In practice, this loss function is used as a regularization term with a coefficient γ: l = lLM + γ · lCPA (17) It worth noting that our approach is modelagnostic and and can be applied to various architectures. 
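To make the regularizer concrete, a minimal sketch of the context-phrase alignment term of Eqs. (15)-(17) is shown below. The tensor shapes and the way negative phrases are drawn (by shuffling the induced-phrase embeddings within a batch) are assumptions made for illustration; the paper samples negatives from the induced phrases of other words.

import torch

def context_phrase_alignment_loss(context, phrase, n_negatives=1):
    # context: (B, d) hidden outputs c_i of the phrase-generator layer
    # phrase:  (B, d) embeddings s_i of the corresponding induced phrases
    positive = torch.sigmoid((context * phrase).sum(dim=-1))        # Eq. (15)
    loss = 1.0 - positive
    for _ in range(n_negatives):
        negatives = phrase[torch.randperm(phrase.shape[0])]         # assumed sampling
        loss = loss + torch.sigmoid((context * negatives).sum(dim=-1)) / n_negatives
    return loss.mean()                                              # Eq. (16), batch average

def regularized_lm_loss(lm_loss, context, phrase, gamma=0.5):
    # Eq. (17); gamma = 0.5 matches the coefficient reported later for the PTB runs.
    return lm_loss + gamma * context_phrase_alignment_loss(context, phrase)

In training, lm_loss would be the ordinary cross-entropy of the word generator, and context would be taken from whichever intermediate layer is designated as the phrase generator.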
The TCN network for calculating the syntactic heights and phrase inducing is an independent module. In context-phrase alignment training with negative sampling, the objective function provides phrase-aware gradients and does not change the word-by-word generation process of the language model. 5 Experiments We evaluate our model with word-level language modeling tasks on Penn Treebank (PTB; Mikolov et al., 2010), Wikitext-2 (WT2; Bradbury et al., 2016), and Wikitext-103 (WT103; Merity et al., 2016) corpora. The PTB dataset has a vocabulary size of 10,000 unique words. The entire corpus includes roughly 40,000 sentences in the training set, and more than 3,000 sentences in both valid and test set. The WT2 data is about two times larger the the PTB dataset. The dataset consists of Wikipedia articles. The corpus includes 30,000 unique words in its vocabulary and is not cleaned as heavily as the PTB corpus. The WT103 corpus contains a larger vocabulary and more articles than WT2. It consists of 28k articles and more than 100M words in the training set. WT2 and WT103 corpora can evaluate the ability of capturing long-term dependencies (Dai et al., 2018). In each corpus, we apply our approach to publicly-available, state-of-the-art models. This demonstrates that our approach can improve different existing architectures. Our trained models will be published for downloading. The implementation of our models is publicly available.1 5.1 Penn Treebank We train a 3-layer AWD-LSTM language model (Merity et al., 2017) on PTB data set. We use 1,150 as the number of hidden neurons and 400 as the size of word embeddings. We also apply the word embedding tying strategy (Inan et al., 2016). We apply variational dropout for hidden states (Gal and Ghahramani, 2016) and the dropout rate is 0.25. We also apply weight dropout (Merity et al., 2017) and set weight dropout rate as 0.5. We apply stochastic gradient descent (SGD) and averaged SGD (ASGD; Polyak and Juditsky, 1992) 1https://github.com/luohongyin/PILM 1488 Model #Params Dev PPL Test PPL Inan et al. (2016) – Tied Variational LSTM 24M 75.7 73.2 Zilly et al. (2017) – Recurrent Highway Networks 23M 67.9 65.7 Shen et al. (2017) – PRPN 62.0 Pham et al. (2018) – Efficient NAS 24M 60.8 58.6 Melis et al. (2017) – 4-layer skip LSTM (tied) 24M 60.9 58.3 Shen et al. (2018b) – ON-LSTM 25M 58.3 56.2 Liu et al. (2018) – Differentiable NAS 23M 58.3 56.1 Merity et al. (2017) – AWD-LSTM 24M 60.7 58.8 Merity et al. (2017) – AWD-LSTM + finetuning 24M 60.0 57.3 Ours – AWD-LSTM + Phrase Induction - NS 24M 61.0 58.6 Ours – AWD-LSTM + Phrase Induction - Attention 24M 60.2 58.0 Ours – AWD-LSTM + Phrase Induction 24M 59.6 57.5 Ours – AWD-LSTM + Phrase Induction + finetuning 24M 57.8 55.7 Dai et al. (2018) – Transformer-XL 24M 56.7 54.5 Yang et al. (2017) – AWD-LSTM-MoS + finetuning 22M 56.5 54.4 Table 1: Experimental results on Penn Treebank dataset. Compared with the AWD-LSTM baseline models, our method reduced the perplexity on test set by 1.6. Model #Params Dev PPL Test PPL Inan et al. (2016) – Variational LSTM (tied) 28M 92.3 87.7 Inan et al. (2016) – VLSTM + augmented loss 28M 91.5 87.0 Grave et al. (2016) – LSTM 99.3 Grave et al. (2016) – LSTM + Neural cache 68.9 Melis et al. (2017) – 1-Layer LSTM 24M 69.3 69.9 Melis et al. (2017) – 2-Layer Skip Conn. LSTM 24M 69.1 65.9 Merity et al. 
(2017) – AWD-LSTM + finetuning 33M 68.6 65.8 Ours – AWD-LSTM + Phrase Induction 33M 68.4 65.2 Ours – AWD-LSTM + Phrase Induction + finetuning 33M 66.9 64.1 Table 2: Experimental results on Wikitext-2 dataset. for training. The learning rate is 30 and we clip the gradients with a norm of 0.25. For the phrase induction model, we randomly sample 1 negative sample for each context, and the context-phrase alignment loss is given a coefficient of 0.5. The output of the second layer of the neural network is used for learning context-phrase alignment, and the final layer is used for word generation. We compare the word-level perplexity of our model with other state-of-the-art models and our baseline is AWD-LSTM (Merity et al., 2017). The experimental results are shown in Table 1. Although not as good as the Transformer-XL model (Dai et al., 2018) and the mixture of softmax model (Yang et al., 2017), our model significantly improved the AWD-LSTM, reducing 2.2 points of perplexity on the validation set and 1.6 points of perplexity on the test set. Note that the “finetuning” process stands for further training the language models with ASGD algorithm (Merity et al., 2017). We also did an ablation study without either headword attention or negative sampling (NS). The results are listed in Table 1. By simply averaging word vectors in the induced phrase Without the attention mechanism, the model performs worse than the full model by 0.5 perplexity, but is still better than our baseline, the AWD-LSTM model. In the experiment without negative sampling, we only use the embedding of true induced 1489 Model #Params Test PPL Grave et al. (2016) – LSTM 48.7 Bai et al. (2018) – TCN 45.2 Dauphin et al. (2017) – GCNN-8 44.9 Grave et al. (2016) – LSTM + Neural cache 40.8 Dauphin et al. (2017) – GCNN-14 37.2 Merity et al. (2018) – 4-layer QRNN 151M 33.0 Rae et al. (2018) – LSTM + Hebbian + Cache 29.9 Dai et al. (2018) – Transformer-XL Standard 151M 24.0 Baevski and Auli (2018) – Adaptive input 247M 20.5 Dai et al. (2018) – Transformer-XL Large 257M 18.3 Ours – Transformer-XL Large + Phrase Induction 257M 17.4 Table 3: Experimental results on Wikitext-103 dataset. phrases to align with the context embedding. It is also indicated that the negative sampling strategy can improve the performance by 1.1 perplexity. Hence we just test the full model in the following experiments. 5.2 Wikitext-2 We also trained a 3-layer AWD-LSTM language model on the WT2 dataset. The network has the same input size, output size, and hidden size as the model we applied on PTB dataset, following the experiments done by Merity et al. (2017). Some hyper-parameters are different from the PTB language model. We use a batch size of 60. The embedding dropout rate is 0.65 and the dropout rate of hidden outputs is set to 0.2. Other hyperparameters are the same as we set in training on the PTB dataset. The experimental results are shown in Table 2. Our model improves the AWD-LSTM model by reducing 1.7 points of perplexity on both the validation and test sets, while we did not make any change to the architecture of the AWD-LSTM language model. 5.3 Wikitext-103 The current state-of-the-art language model trained on Wikitext-103 dataset is the Transformer-XL (Dai et al., 2018). We apply our method on the state-of-the-art Transformer-XL Large model, which has 18 layers and 257M parameters. The input size and hidden size are 1024. 16 attention heads are used. We regard the first 14 layers as the phrase generator and the last 4 layers as the word generator. 
In other words, the context-phrase alignment is trained with the outputs of the 14th layer. The model is trained on 4 Titan X Pascal GPUs, each of which has 12G of memory. Because of the limited computational resources, we use our approach to fine-tune the officially released pretrained Transformer-XL Large model for 1 epoch. The experimental results are shown in Table 3. Our approach reaches 17.4 perplexity with the officially released evaluation scripts, significantly outperforming all baselines and achieving new state-of-the-art performance. (We do not show dev PPLs in Table 3 since only the correct approach to reproduce the test PPL was provided with the pretrained Transformer-XL model.)
6 Discussion
In this section, we show what is learned by training language models with the context-phrase alignment objective function, by visualizing the syntactic heights output by the TCN model and the phrases induced by each target word in a sentence. We also visualize the headword attentions over the induced phrases. The first example is the sentence shown in Figure 1. The sentence came from Jurafsky and Martin (2014) and did not appear in our training set. Figure 1 shows the syntactic heights and the induced phrase of "the" according to the ground-truth dependency information; our model is not given such high-quality inputs in either training or evaluation. Figure 3 visualizes the structure learned by our phrase induction model.
[Figure 3, panel (a): predicted vs. ground-truth syntactic heights of each word in "United canceled the morning flights to Houston"; panel (b): induced phrases and headword attentions.] Figure 3: Examples of induced phrases and the corresponding headword attention for generating the phrase embedding. The word at the beginning of each row stands for the target word that is the current input of the language model, and the values in each row of the matrices stand for the words constituting the induced phrase and their weights.
[Figure 4, panels (a)–(f): further examples, including "we did n't even get a chance to do the programs we wanted to do", "several fund managers expect a rough market this morning before prices stabilize", "but a majority of the <unk> council did n't buy those arguments", and "at least they both speak with strong <unk> as do <unk> and <unk>".] Figure 4: Examples of phrase induction and headword attentions.
The inferred syntactic heights are shown in Figure 3a. The heights assigned to the words "the" and "to" are significantly lower than the others, while the verb "canceled" is assigned the highest height in the sentence. Induced phrases are shown in Figure 3b. The word at the beginning of each row stands for the target word at each step, and the values in the matrix stand for the attention weights used to calculate the phrase embedding. The weights are calculated with the phrase segmenting conditions (PSC) and the syntactic heights described in Equations 8 to 11. For the target word "united", h(united) < h(canceled) and h(canceled) > h(the), hence the induced phrase of "united" is the single word "canceled", and the headword attention of "canceled" is 1, as indicated in the first row of Figure 3b.
The phrase induced by “canceled” is the entire following sequence, “the morning flights to houston”, since no following word has a higher syntactic height than the target word. It is also shown that the headword of the induced phrase of “canceled” is “flights”, which agrees with the dependency structure indicated in Figure 1. More examples are shown in Figure 4. Figures 4a to 4d show random examples without any unknown word, while the examples shown in Figures 4e and 4f are randomly selected from sentences with unknown words, which are marked with the UNK symbol. The examples show that the phrase induction model does not always predict the exact structure represented by the dependency tree. For example, in Figure 4b, the TCN model assigned the highest syntactic height to the word “market” and induced the phrase “expect a rough market” for the context “the fund managers”. However, in a ground-truth dependency tree, the verb “expect” is the word directly connected to the root node and therefore has the highest syntactic height. Although not exactly matching linguistic dependency structures, the phrase-level structure predictions are reasonable. The segmentation is interpretable and the predicted headwords are appropriate. In Figure 4c, the headwords are “trying”, “quality”, and “involvement”. The model is also robust with unknown words. In Figure 4e, “the <unk> council” is segmented as the induced phrase of “but a majority of”. In this case, the model recognized that the unknown word is dependent on “council”. The sentence in Figure 4f includes even more unknown words. However, the model still correctly predicted the root word, the verb “speak”. For the target word “with”, the induced phrase is “strong <unk>”. Two unknown words are located in the last few words of the sentence. The model failed to induce the phrase “<unk> and <unk>” for the word “do”, but still successfully split “<unk>” and “and”. Meanwhile, the attentions over the phrases induced by “speak”, “do”, and the first “<unk>” are not quite informative, suggesting that unknown words made some difficulties for headword prediction in this example. However, the unknown words are assigned significantly higher syntactic heights than the word “and”. 7 Conclusion In this work, we improved state-of-the-art language models by aligning context and induced phrases. We defined syntactic heights and phrase segmentation rules. The model generates phrase embeddings with headword attentions. We improved the AWD-LSTM and Transformer-XL language models on different data sets and achieved state-of-the-art performance on the Wikitext-103 corpus. Experiments showed that our model successfully learned approximate phrase-level knowledge, including segmentation and headwords, without any annotation. In future work, we aim to capture better structural information and possible connections to unsupervised grammar induction. References Alexei Baevski and Michael Auli. 2018. Adaptive input representations for neural language modeling. arXiv preprint arXiv:1809.10853. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Shaojie Bai, J Zico Kolter, and Vladlen Koltun. 2018. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. 
Journal of machine learning research, 3(Feb):1137–1155. James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2016. Quasi-recurrent neural networks. arXiv preprint arXiv:1611.01576. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733. 1492 Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2018. Transformer-xl: Language modeling with longer-term dependency. Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In Proceedings of the 34th International Conference on Machine LearningVolume 70, pages 933–941. JMLR. org. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A Smith. 2016. Recurrent neural network grammars. arXiv preprint arXiv:1602.07776. Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Advances in neural information processing systems, pages 1019–1027. Edouard Grave, Armand Joulin, and Nicolas Usunier. 2016. Improving neural language models with a continuous cache. arXiv preprint arXiv:1612.04426. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Hakan Inan, Khashayar Khosravi, and Richard Socher. 2016. Tying word vectors and word classifiers: A loss framework for language modeling. arXiv preprint arXiv:1611.01462. Dan Jurafsky and James H Martin. 2014. Speech and language processing, volume 3. Pearson London. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In Thirtieth AAAI Conference on Artificial Intelligence. Hanxiao Liu, Karen Simonyan, and Yiming Yang. 2018. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055. G´abor Melis, Charles Blundell, Tom´aˇs Koˇcisk`y, Karl Moritz Hermann, Chris Dyer, and Phil Blunsom. 2018. Pushing the bounds of dropout. arXiv preprint arXiv:1805.09208. G´abor Melis, Chris Dyer, and Phil Blunsom. 2017. On the state of the art of evaluation in neural language models. arXiv preprint arXiv:1707.05589. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017. Regularizing and optimizing lstm language models. arXiv preprint arXiv:1708.02182. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. An analysis of neural language modeling at multiple scales. arXiv preprint arXiv:1803.08240. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh annual conference of the international speech communication association. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeff Dean. 2018. Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268. Boris T Polyak and Anatoli B Juditsky. 1992. 
Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855. Jack W Rae, Chris Dyer, Peter Dayan, and Timothy P Lillicrap. 2018. Fast parametric learning with activation memorization. arXiv preprint arXiv:1803.10049. Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron Courville. 2017. Neural language modeling by jointly learning syntax and lexicon. arXiv preprint arXiv:1711.02013. Yikang Shen, Zhouhan Lin, Athul Paul Jacob, Alessandro Sordoni, Aaron Courville, and Yoshua Bengio. 2018a. Straight to the tree: Constituency parsing with neural syntactic distance. arXiv preprint arXiv:1806.04168. Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2018b. Ordered neurons: Integrating tree structures into recurrent neural networks. arXiv preprint arXiv:1810.09536. Ke Tran, Arianna Bisazza, and Christof Monz. 2016. Recurrent memory networks for language modeling. arXiv preprint arXiv:1601.01272. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W Cohen. 2017. Breaking the softmax bottleneck: A high-rank rnn language model. arXiv preprint arXiv:1711.03953. 1493 Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutn´ık, and J¨urgen Schmidhuber. 2017. Recurrent highway networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 4189–4198. JMLR. org.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1494–1503 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1494 Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks 1Yi Tay, 2Aston Zhang, 3Luu Anh Tuan, 4Jinfeng Rao∗, 5Shuai Zhang 6Shuohang Wang, 7Jie Fu, 8Siu Cheung Hui 1,8Nanyang Technological University, 2Amazon AI, 3MIT CSAIL 4Facebook AI, 5UNSW, 6Singapore Management University, 7Mila and Polytechnique Montr´eal [email protected] Abstract Many state-of-the-art neural models for NLP are heavily parameterized and thus memory inefficient. This paper proposes a series of lightweight and memory efficient neural architectures for a potpourri of natural language processing (NLP) tasks. To this end, our models exploit computation using Quaternion algebra and hypercomplex spaces, enabling not only expressive inter-component interactions but also significantly (75%) reduced parameter size due to lesser degrees of freedom in the Hamilton product. We propose Quaternion variants of models, giving rise to new architectures such as the Quaternion attention Model and Quaternion Transformer. Extensive experiments on a battery of NLP tasks demonstrates the utility of proposed Quaternion-inspired models, enabling up to 75% reduction in parameter size without significant loss in performance. 1 Introduction Neural network architectures such as Transformers (Vaswani et al., 2017; Dehghani et al., 2018) and attention networks (Parikh et al., 2016; Seo et al., 2016; Bahdanau et al., 2014) are dominant solutions in natural language processing (NLP) research today. Many of these architectures are primarily concerned with learning useful feature representations from data in which providing a strong architectural inductive bias is known to be extremely helpful for obtaining stellar results. Unfortunately, many of these models are known to be heavily parameterized, with state-of-the-art models easily containing millions or billions of parameters (Vaswani et al., 2017; Radford et al., 2018; Devlin et al., 2018; Radford et al., 2019). This renders practical deployment challenging. As such, the enabling of efficient and lightweight ∗Work done while at University of Maryland. adaptations of these models, without significantly degrading performance, would certainly have a positive impact on many real world applications. To this end, this paper explores a new way to improve/maintain the performance of these neural architectures while substantially reducing the parameter cost (compression of up to 75%). In order to achieve this, we move beyond real space, exploring computation in Quaternion space (i.e., hypercomplex numbers) as an inductive bias. Hypercomplex numbers comprise of a real and three imaginary components (e.g., i, j, k) in which interdependencies between these components are encoded naturally during training via the Hamilton product ⊗. Hamilton products have fewer degrees of freedom, enabling up to four times compression of model size. Technical details are deferred to subsequent sections. While Quaternion connectionist architectures have been considered in various deep learning application areas such as speech recognition (Parcollet et al., 2018b), kinematics/human motion (Pavllo et al., 2018) and computer vision (Gaudet and Maida, 2017), our work is the first hypercomplex inductive bias designed for a wide spread of NLP tasks. 
Other fields have motivated the usage of Quaternions primarily due to their natural 3 or 4 dimensional input features (e.g., RGB scenes or 3D human poses) (Parcollet et al., 2018b; Pavllo et al., 2018). In a similar vein, we can similarly motivate this by considering the multi-sense nature of natural language (Li and Jurafsky, 2015; Neelakantan et al., 2015; Huang et al., 2012). In this case, having multiple embeddings or components per token is well-aligned with this motivation. Latent interactions between components may also enjoy additional benefits, especially pertaining to applications which require learning pairwise affinity scores (Parikh et al., 2016; Seo 1495 et al., 2016). Intuitively, instead of regular (real) dot products, Hamilton products ⊗extensively learn representations by matching across multiple (inter-latent) components in hypercomplex space. Alternatively, the effectiveness of multi-view and multi-headed (Vaswani et al., 2017) approaches may also explain the suitability of Quaternion spaces in NLP models. The added advantage to multi-headed approaches is that Quaternion spaces explicitly encodes latent interactions between these components or heads via the Hamilton product which intuitively increases the expressiveness of the model. Conversely, multi-headed embeddings are generally independently produced. To this end, we propose two Quaternioninspired neural architectures, namely, the Quaternion attention model and the Quaternion Transformer. In this paper, we devise and formulate a new attention (and self-attention) mechanism in Quaternion space using Hamilton products. Transformation layers are aptly replaced with Quaternion feed-forward networks, yielding substantial improvements in parameter size (of up to 75% compression) while achieving comparable (and occasionally better) performance. Contributions All in all, we make the following major contributions: • We propose Quaternion neural models for NLP. More concretely, we propose a novel Quaternion attention model and Quaternion Transformer for a wide range of NLP tasks. To the best of our knowledge, this is the first formulation of hypercomplex Attention and Quaternion models for NLP. • We evaluate our Quaternion NLP models on a wide range of diverse NLP tasks such as pairwise text classification (natural language inference, question answering, paraphrase identification, dialogue prediction), neural machine translation (NMT), sentiment analysis, mathematical language understanding (MLU), and subject-verb agreement (SVA). • Our experimental results show that Quaternion models achieve comparable or better performance to their real-valued counterparts with up to a 75% reduction in parameter costs. The key advantage is that these models are expressive (due to Hamiltons) and also parameter efficient. Moreover, our Quaternion components are self-contained and play well with real-valued counterparts. 2 Background on Quaternion Algebra This section introduces the necessary background for this paper. We introduce Quaternion algebra along with Hamilton products, which form the crux of our proposed approaches. Quaternion A Quaternion Q ∈H is a hypercomplex number with three imaginary components as follows: Q = r + xi + yj + zk, (1) where ijk = i2 = j2 = k2 = −1 and noncommutative multiplication rules apply: ij = k, jk = i, ki = j, ji = −k, kj = −i, ik = −j. In (1), r is the real value and similarly, x, y, z are real numbers that represent the imaginary components of the Quaternion vector Q. 
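As a quick sanity check of these multiplication rules, the short Python snippet below multiplies quaternions by expanding over the basis {1, i, j, k}. It is a toy illustration only (representing a quaternion as a dictionary of coefficients is our own choice, not anything from the paper), but it makes the non-commutativity concrete: ij = k while ji = −k.

```python
# Basis product table from the rules above: (row basis, column basis) -> (sign, basis).
RULES = {
    ("1", "1"): (1, "1"), ("1", "i"): (1, "i"), ("1", "j"): (1, "j"), ("1", "k"): (1, "k"),
    ("i", "1"): (1, "i"), ("i", "i"): (-1, "1"), ("i", "j"): (1, "k"), ("i", "k"): (-1, "j"),
    ("j", "1"): (1, "j"), ("j", "i"): (-1, "k"), ("j", "j"): (-1, "1"), ("j", "k"): (1, "i"),
    ("k", "1"): (1, "k"), ("k", "i"): (1, "j"), ("k", "j"): (-1, "i"), ("k", "k"): (-1, "1"),
}

def qmul(p, q):
    """Multiply two quaternions given as {basis: coefficient} dictionaries."""
    out = {"1": 0.0, "i": 0.0, "j": 0.0, "k": 0.0}
    for bp, cp in p.items():
        for bq, cq in q.items():
            sign, basis = RULES[(bp, bq)]
            out[basis] += sign * cp * cq
    return out

i = {"1": 0.0, "i": 1.0, "j": 0.0, "k": 0.0}
j = {"1": 0.0, "i": 0.0, "j": 1.0, "k": 0.0}
print(qmul(i, j))  # k component is +1:  ij = k
print(qmul(j, i))  # k component is -1:  ji = -k, so multiplication is non-commutative
```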
Operations on Quaternions are defined in the following. Addition The addition of two Quaternions is defined as: Q + P = Qr + Pr + (Qx + Px)i +(Qy + Py)j + (Qz + Pz)k, where Q and P with subscripts denote the real value and imaginary components of Quaternion Q and P. Subtraction follows this same principle analogously but flipping + with −. Scalar Multiplication Scalar α multiplies across all components, i.e., αQ = αr + αxi + αyj + αzk. Conjugate The conjugate of Q is defined as: Q∗= r −xi −yj −zk. Norm The unit Quaternion Q◁is defined as: Q◁= Q p r2 + x2 + y2 + z2 . Hamilton Product The Hamilton product, which represents the multiplication of two Quaternions Q and P, is defined as: Q ⊗P = (QrPr −QxPx −QyPy −QzPz) + (QxPr + QrPx −QzPy + QyPz) i + (QyPr + QzPx + QrPy −QxPz) j + (QzPr −QyPx + QxPy + QrPz) k, (2) 1496 which intuitively encourages inter-latent interaction between all the four components of Q and P. In this work, we use Hamilton products extensively for vector and matrix transformations that live at the heart of attention models for NLP. 3 Quaternion Models of Language In this section, we propose Quaternion neural models for language processing tasks. We begin by introducing the building blocks, such as Quaternion feed-forward, Quaternion attention, and Quaternion Transformers. 3.1 Quaternion Feed-Forward A Quaternion feed-forward layer is similar to a feed-forward layer in real space, while the former operates in hypercomplex space where Hamilton product is used. Denote by W ∈H the weight parameter of a Quaternion feed-forward layer and let Q ∈H be the layer input. The linear output of the layer is the Hamilton product of two Quaternions: W ⊗Q. Saving Parameters? How and Why In lieu of the fact that it might not be completely obvious at first glance why Quaternion models result in models with smaller parameterization, we dedicate the following to address this. For the sake of parameterization comparison, let us express the Hamilton product W ⊗Q in a Quaternion feed-forward layer in the form of matrix multiplication, which is used in real-space feed-forward. Recall the definition of Hamilton product in (2). Putting aside the Quaterion unit basis [1, i, j, k]⊤, W ⊗Q can be expressed as:   Wr −Wx −Wy −Wz Wx Wr −Wz Wy Wy Wz Wr −Wx Wz −Wy Wx Wr     r x y z  , (3) where W = Wr + Wxi + Wyj + Wzk and Q is defined in (1). We highlight that, there are only 4 distinct parameter variable elements (4 degrees of freedom), namely Wr, Wx, Wy, Wz, in the weight matrix (left) of (3), as illustrated by Figure 1; while in real-space feed-forward, all the elements of the weight matrix are different parameter variables (4 × 4 = 16 degrees of freedom). In other words, the degrees of freedom in Quaternion feedforward is only a quarter of those in its real-space r x x’ FRPSRQHQWVRIWKHLQSXW4XDWHUQLRQ4 y z z’ y’ r’ FRPSRQHQWVRIWKHRXWSXW4XDWHUQLRQ4Ň SDLUZLVHFRQQHFWLRQVZLWKZHLJKWSDUDPHWHUYDULDEOHV Wx Wr -Wz Wy r x y z r’ Wr -Wx -Wy -Wz Wy Wz Wr -Wx Wz -Wy Wx Wr r x x’ y z r x y z y’ r x y z z’ Figure 1: 4 weight parameter variables (Wr, Wx, Wy, Wz) are used in 16 pairwise connections between components of the input and output Quaternions. counterpart, resulting in a 75% reduction in parameterization. Such a parameterization reduction can also be explained by weight sharing (Parcollet et al., 2018b,a). 
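To make the weight-sharing argument concrete, the following is a minimal PyTorch sketch of a quaternion feed-forward layer built directly from the block matrix in (3). It is our own illustrative code, not the authors' released implementation; the class name is arbitrary, and we initialize each component independently with Glorot initialization as the paper later states.

```python
import torch
import torch.nn as nn

class QuaternionLinear(nn.Module):
    """Quaternion feed-forward layer computing W ⊗ Q, a sketch of Eq. (3).

    The real-valued input of size in_features is read as four stacked
    components [r; x; y; z]. Only four weight blocks are stored and reused
    (with sign flips) across all sixteen block positions of Eq. (3), so the
    layer holds roughly a quarter of the weights of an equally sized nn.Linear.
    """

    def __init__(self, in_features, out_features):
        super().__init__()
        assert in_features % 4 == 0 and out_features % 4 == 0
        shape = (out_features // 4, in_features // 4)
        self.r, self.i, self.j, self.k = (
            nn.Parameter(torch.empty(*shape)) for _ in range(4))
        for w in (self.r, self.i, self.j, self.k):
            nn.init.xavier_uniform_(w)  # each component initialized independently

    def forward(self, q):
        r, x, y, z = torch.chunk(q, 4, dim=-1)
        # The four rows of the block matrix in Eq. (3), applied to [r; x; y; z].
        out_r = r @ self.r.t() - x @ self.i.t() - y @ self.j.t() - z @ self.k.t()
        out_x = r @ self.i.t() + x @ self.r.t() - y @ self.k.t() + z @ self.j.t()
        out_y = r @ self.j.t() + x @ self.k.t() + y @ self.r.t() - z @ self.i.t()
        out_z = r @ self.k.t() - x @ self.j.t() + y @ self.i.t() + z @ self.r.t()
        return torch.cat([out_r, out_x, out_y, out_z], dim=-1)
```

For instance, QuaternionLinear(512, 512) stores 4 × 128 × 128 = 65,536 weights, versus 512 × 512 = 262,144 for nn.Linear(512, 512) (ignoring biases), matching the 75% reduction discussed above.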
Nonlinearity Nonlinearity can be added to a Quaternion feed-forward layer and componentwise activation is adopted (Parcollet et al., 2018a): α(Q) = α(r) + α(x)i + α(y)j + +α(z)k, where Q is defined in (1) and α(.) is a nonlinear function such as tanh or ReLU. 3.2 Quaternion Attention Next, we propose a Quaternion attention model to compute attention and alignment between two sequences. Let A ∈Hℓa×d and B ∈Hℓb×d be input word sequences, where ℓa, ℓb are numbers of tokens in each sequence and d is the dimension of each input vector. We first compute: E = A ⊗B⊤, where E ∈Hℓa×ℓb. We apply Softmax(.) to E component-wise: G = ComponentSoftmax(E) B′ = GRBR + GXBXi + GY BY j + GZBZk, where G and B with subscripts represent the real and imaginary components of G and B. Similarly, we perform the same on A which is described as follows: F = ComponentSoftmax(E⊤) A′ = FRAR + FXAXi + FY AY j + FZAZk, 1497 where A′ is the aligned representation of B and B′ is the aligned representation of A. Next, given A′ ∈Rℓb×d, B′ ∈RℓA×d we then compute and compare the learned alignments: C1 = X QFFN([A′ i; Bi, A′ i ⊗Bi; A′ i −Bi]) C2 = X QFFN([B′ i; Ai, B′ i ⊗Ai; B′ i −Ai]), where QFFN(.) is a Quaternion feed-forward layer with nonlinearity and [; ] is the component-wise contatentation operator. i refers to word positional indices and P over words in the sequence. Both outputs C1, C2 are then passed Y = QFFN([C1; C2; C1 ⊗C2; C1 −C2]), where Y ∈H is a Quaternion valued output. In order to train our model end-to-end with real-valued losses, we concatenate each component and pass into a final linear layer for classification. 3.3 Quaternion Transformer This section describes our Quaternion adaptation of Transformer networks. Transformer (Vaswani et al., 2017) can be considered state-of-the-art across many NLP tasks. Transformer networks are characterized by stacked layers of linear transforms along with its signature self-attention mechanism. For the sake of brevity, we outline the specific changes we make to the Transformer model. Quaternion Self-Attention The standard selfattention mechanism considers the following: A = softmax(QK⊤ √dk )V, where Q, K, V are traditionally learned via linear transforms from the input X. The key idea here is that we replace this linear transform with a Quaternion transform. Q = Wq ⊗X; K = Wk ⊗X; V = Wv ⊗X, where ⊗is the Hamilton product and X is the input Quaternion representation of the layer. In this case, since computation is performed in Quaternion space, the parameters of W is effectively reduced by 75%. Similarly, the computation of selfattention also relies on Hamilton products. The revised Quaternion self-attention is defined as follows: A = ComponentSoftmax(Q ⊗K √dk )V. (4) Note that in (4), Q ⊗K returns four ℓ× ℓ matrices (attention weights) for each component (r, i, j, k). Softmax is applied component-wise, along with multiplication with V which is multiplied in similar fashion to the Quaternion attention model. Note that the Hamilton product in the selfattention itself does not change the parameter size of the network. Quaternion Transformer Block Aside from the linear transformations for forming query, key, and values. Tranformers also contain position feed-forward networks with ReLU activations. Similarly, we replace the feed-forward connections (FFNs) with Quaternion FFNs. We denote this as Quaternion Transformer (full) while denoting the model that only uses Quaternion FFNs in the self-attention as (partial). 
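Before moving on, here is a compact sketch of the quaternion self-attention step of (4), reusing the QuaternionLinear sketch from earlier as the projection layers. It is an illustration of our reading of Eqs. (2) and (4) rather than the authors' code: Q ⊗ K⊤ is expanded component-wise, softmax is applied per component, and each component of V is mixed by its own attention matrix.

```python
import torch
import torch.nn.functional as F

def hamilton_scores(q, k):
    """Component-wise score matrices of Q ⊗ K^T for quaternion-valued
    sequences q, k of shape (length, dim), with dim divisible by 4."""
    qr, qx, qy, qz = torch.chunk(q, 4, dim=-1)
    kr, kx, ky, kz = torch.chunk(k, 4, dim=-1)
    # One (len_q, len_k) matrix per component, following Eq. (2).
    er = qr @ kr.t() - qx @ kx.t() - qy @ ky.t() - qz @ kz.t()
    ex = qx @ kr.t() + qr @ kx.t() - qz @ ky.t() + qy @ kz.t()
    ey = qy @ kr.t() + qz @ kx.t() + qr @ ky.t() - qx @ kz.t()
    ez = qz @ kr.t() - qy @ kx.t() + qx @ ky.t() + qr @ kz.t()
    return er, ex, ey, ez

def quaternion_self_attention(x, wq, wk, wv, d_k):
    """Sketch of Eq. (4): wq, wk, wv are quaternion projections (e.g. the
    QuaternionLinear sketch); softmax is applied to each component's score
    matrix, which then mixes the matching component of V."""
    q, k, v = wq(x), wk(x), wv(x)
    scores = hamilton_scores(q, k)
    v_parts = torch.chunk(v, 4, dim=-1)
    out = [F.softmax(e / d_k ** 0.5, dim=-1) @ vc
           for e, vc in zip(scores, v_parts)]
    return torch.cat(out, dim=-1)
```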
Finally, the remainder of the Transformer networks remain identical to the original design (Vaswani et al., 2017) in the sense that component-wise functions are applied unless specified above. 3.4 Embedding Layers In the case where the word embedding layer is trained from scratch (i.e., using Byte-pair encoding in machine translation), we treat each embedding to be the concatenation of its four components. In the case where pre-trained embeddings such as GloVe (Pennington et al., 2014) are used, a nonlinear transform is used to project the embeddings into Quaternion space. 3.5 Connection to Real Components A vast majority of neural components in the deep learning arsenal operate in real space. As such, it would be beneficial for our Quaternion-inspired components to interface seamlessly with these components. If input to a Quaternion module (such as Quaternion FFN or attention modules), we simply treat the real-valued input as a concatenation of components r, x, y, z. Similarly, the output of the Quaternion module, if passed to a realvalued layer, is treated as a [r; x; y; z], where [; ] is the concatenation operator. Output layer and Loss Functions To train our model, we simply concatenate all r, i, j, k components into a single vector at the final output layer. For example, for classification, the final Softmax output is defined as following: Y = Softmax(W([r; x; y; z]) + b), 1498 where Y ∈R|C| where |C| is the number of classes and x, y, z are the imaginary components. Similarly for sequence loss (for sequence transduction problems), the same can be also done. Parameter Initialization It is intuitive that specialized initialization schemes ought to be devised for Quaternion representations and their modules (Parcollet et al., 2018b,a). w = |w|(cos(θ) + q◁ imag sin(θ), where q◁ imag is the normalized imaginary constructed from uniform randomly sampling from [0, 1]. θ is randomly and uniformly sampled from [−π, π]. However, our early experiments show that, at least within the context of NLP applications, this initialization performed comparable or worse than the standard Glorot initialization. Hence, we opt to initialize all components independently with Glorot initialization. 4 Experiments This section describes our experimental setup across multiple diverse NLP tasks. All experiments were run on NVIDIA Titan X hardware. Our Models On pairwise text classification, we benchmark Quaternion attention model (Q-Att), testing the ability of Quaternion models on pairwise representation learning. On all the other tasks, such as machine translation and subjectverb agreement, we evaluate Quaternion Transformers. We evaluate two variations of Transformers, full and partial. The full setting converts all linear transformations into Quaternion space and is approximately 25% of the actual Transformer size. The second setting (partial) only reduces the linear transforms at the self-attention mechanism. Tensor2Tensor1 is used for Transformer benchmarks, which uses its default Hyperparameters and encoding for all experiments. 4.1 Pairwise Text Classification We evaluate our proposed Quaternion attention (Q-Att) model on pairwise text classification tasks. This task involves predicting a label or ranking score for sentence pairs. We use a total of seven data sets from problem domains such as: 1https://github.com/tensorflow/ tensor2tensor. • Natural language inference (NLI) - This task is concerned with determining if two sentences entail or contradict each other. 
We use SNLI (Bowman et al., 2015), SciTail (Khot et al., 2018), MNLI (Williams et al., 2017) as benchmark data sets. • Question answering (QA) - This task involves learning to rank question-answer pairs. We use WikiQA (Yang et al., 2015) which comprises of QA pairs from Bing Search. • Paraphrase detection - This task involves detecting if two sentences are paraphrases of each other. We use Tweets (Lan et al., 2017) data set and the Quora paraphrase data set (Wang et al., 2017). • Dialogue response selection - This is a response selection (RS) task that tries to select the best response given a message. We use the Ubuntu dialogue corpus, UDC (Lowe et al., 2015). Implementation Details We implement Q-Att in TensorFlow (Abadi et al., 2016), along with the Decomposable Attention baseline (Parikh et al., 2016). Both models optimize the cross entropy loss (e.g., binary cross entropy for ranking tasks such as WikiQA and Ubuntu). Models are optimized with Adam with the learning rate tuned amongst {0.001, 0.0003} and the batch size tuned amongst {32, 64}. Embeddings are initialized with GloVe (Pennington et al., 2014). For QAtt, we use an additional transform layer to project the pre-trained embeddings into Quaternion space. The measures used are generally the accuracy measure (for NLI and Paraphrase tasks) and ranking measures (MAP/MRR/Top-1) for ranking tasks (WikiQA and Ubuntu). Baselines and Comparison We use the Decomposable Attention model as a baseline, adding [ai; bi; ai ⊙bi; ai −bi] before the compare2 layers since we found this simple modification to increase performance. This also enables fair comparison with our variation of Quaternion attention which uses Hamilton product over Element-wise multiplication. We denote this as DeAtt. We evaluate at a fixed representation size of d = 200 2This follows the matching function of (Chen et al., 2016). 1499 Task NLI QA Paraphrase RS Measure Accuracy MAP/MRR Accuracy Top-1 Model SNLI SciTail MNLI WikiQA Tweet Quora UDC # Params DeAtt (d = 50) 83.4 73.8 69.9/70.9 66.0/67.1 77.8 82.2 48.7 200K DeAtt (d = 200) 86.2 79.0 73.6/73.9 67.2/68.3 80.0 85.4 51.8 700K Q-Att (d = 50) 85.4 79.6 72.3/72.9 66.2/68.1 80.1 84.1 51.5 200K (-71%) Table 1: Experimental results on pairwise text classification and ranking tasks. Q-Att achieves comparable or competitive results compared with DeAtt with approximately one third of the parameter cost. Model IMDb SST # Params Transformer 82.6 78.9 400K Quaternion Transformer (full) 83.9 (+1.3%) 80.5 (+1.6%) 100K (-75.0%) Quaternion Transformer (partial) 83.6 (+1.0%) 81.4 (+2.5%) 300K (-25.0%) Table 2: Experimental results on sentiment analysis on IMDb and Stanford Sentiment Treebank (SST) data sets. Evaluation measure is accuracy. (equivalent to d = 50 in Quaternion space). We also include comparisons at equal parameterization (d = 50 and approximately 200K parameters) to observe the effect of Quaternion representations. We selection of DeAtt is owing to simplicity and ease of comparison. We defer the prospect of Quaternion variations of more advanced models (Chen et al., 2016; Tay et al., 2017b) to future work. Results Table 1 reports results on seven different and diverse data sets. We observe that a tiny Q-Att model (d = 50) achieves comparable (or occasionally marginally better or worse) performance compared to DeAtt (d = 200), gaining a 68% parameter savings. The results actually improve on certain data sets (2/7) and are comparable (often less than a percentage point difference) compared with the d = 200 DeAtt model. 
Moreover, we scaled the parameter size of the DeAtt model to be similar to the Q-Att model and found that the performance degrades quite significantly (about 2% −3% lower on all data sets). This demonstrates the quality and benefit of learning with Quaternion space. 4.2 Sentiment Analysis We evaluate on the task of document-level sentiment analysis which is a binary classification problem. Implementation Details We compare our proposed Quaternion Transformer against the vanilla Transformer. In this experiment, we use the tiny Transformer setting in Tensor2Tensor with a vocab size of 8K. We use two data sets, namely IMDb (Maas et al., 2011) and Stanford Sentiment Treebank (SST) (Socher et al., 2013). Results Table 2 reports results the sentiment classification task on IMDb and SST. We observe that both the full and partial variation of Quaternion Transformers outperform the base Transformer. We observe that Quaternion Transformer (partial) obtains a +1.0% lead over the vanilla Transformer on IMDb and +2.5% on SST. This is while having a 24.5% saving in parameter cost. Finally the full Quaternion version leads by +1.3%/1.6% gains on IMDb and SST respectively while maintaining a 75% reduction in parameter cost. This supports our core hypothesis of improving accuracy while saving parameter costs. 4.3 Neural Machine Translation We evaluate our proposed Quaternion Transformer against vanilla Transformer on three data sets on this neural machine translation (NMT) task. More concretely, we evaluate on IWSLT 2015 English Vietnamese (En-Vi), WMT 2016 EnglishRomanian (En-Ro) and WMT 2018 EnglishEstonian (En-Et). We also include results on the standard WMT EN-DE English-German results. Implementation Details We implement models in Tensor2Tensor and trained for 50k steps for both models. We use the default base single GPU hyperparameter setting for both models and average checkpointing. Note that our goal is not to obtain state-of-the-art models but to fairly and systematically evaluate both vanilla and Quaternion Transformers. 1500 BLEU Model IWSLT’15 En-Vi WMT’16 En-Ro WMT’18 En-Et # Params Transformer Base 28.4 22.8 14.1 44M Quaternion Transformer (full) 28.0 18.5 13.1 11M (-75%) Quaternion Transformer (partial) 30.9 22.7 14.2 29M (-32%) Table 3: Experimental results on neural machine translation (NMT). Results of Transformer Base on EN-VI (IWSLT 2015), EN-RO (WMT 2016) and EN-ET (WMT 2018). Parameter size excludes word embeddings. Our proposed Quaternion Transformer achieves comparable or higher performance with only 67.9% parameter costs of the base Transformer model. Results Table 3 reports the results on neural machine translation. On the IWSLT’15 En-Vi data set, the partial adaptation of the Quaternion Transformer outperforms (+2.5%) the base Transformer with a 32% reduction in parameter cost. On the other hand, the full adaptation comes close (−0.4%) with a 75% reduction in paramter cost. On the WMT’16 En-Ro data set, Quaternion Transformers do not outperform the base Transformer. We observe a −0.1% degrade in performance on the partial adaptation and −4.3% degrade on the full adaptation of the Quaternion Transformer. However, we note that the drop in performance with respect to parameter savings is still quite decent, e.g., saving 32% parameters for a drop of only 0.1 BLEU points. The full adaptation loses out comparatively. On the WMT’18 EnEt dataset, the partial adaptation achieves the best result with 32% less parameters. 
The full adaptation, comparatively, only loses by 1.0 BLEU score from the original Transformer yet saving 75% parameters. WMT English-German Notably, Quaternion Transformer achieves a BLEU score of 26.42/25.14 for partial/full settings respectively on the standard WMT 2014 En-De benchmark. This is using a single GPU trained for 1M steps with a batch size of 8192. We note that results do not differ much from other single GPU runs (i.e., 26.07 BLEU) on this dataset (Nguyen and Joty, 2019). 4.4 Mathematical Language Understanding We include evaluations on a newly released mathematical language understanding (MLU) data set (Wangperawong, 2018). This data set is a character-level transduction task that aims to test a model’s the compositional reasoning capabilities. For example, given an input x = 85, y = −523, x ∗y the model strives to decode an output of −44455. Several variations of these problems exist, mainly switching and introduction of new mathematical operators. Implementation Details We train Quaternion Transformer for 100K steps using the default Tensor2Tensor setting following the original work (Wangperawong, 2018). We use the tiny hyperparameter setting. Similar to NMT, we report both full and partial adaptations of Quaternion Transformers. Baselines are reported from the original work as well, which includes comparisons from Universal Transformers (Dehghani et al., 2018) and Adaptive Computation Time (ACT) Universal Transformers. The evaluation measure is accuracy per sequence, which counts a generated sequence as correct if and only if the entire sequence is an exact match. Results Table 4 reports our experimental results on the MLU data set. We observe a modest +7.8% accuracy gain when using the Quaternion Transformer (partial) while saving 24.5% parameter costs. Quaternion Transformer outperforms Universal Transformer and marginally is outperformed by Adaptive Computation Universal Transformer (ACT U-Transformer) by 0.5%. On the other hand, a full Quaternion Transformer still outperforms the base Transformer (+2.8%) with 75% parameter saving. 4.5 Subject Verb Agreement Additionally, we compare our Quaternion Transformer on the subject-verb agreement task (Linzen et al., 2016). The task is a binary classification problem, determining if a sentence, e.g., ‘The keys to the cabinet .’ follows by a plural/singular. Implementation We use the Tensor2Tensor framework, training Transformer and Quaternion Transformer with the tiny hyperparameter setting with 10k steps. Results Table 5 reports the results on the SVA task. Results show that Quaternion Transform1501 Model Acc / Seq # Params Universal Transformer 78.8 ACT U-Transformer 84.9 Transformer 76.1 400K Quaternion Transformer (full) 78.9 (+2.8%) 100K (-75%) Quaternion Transformer (partial) 84.4 (+8.3%) 300K ( -25%) Table 4: Experimental results on mathematical language understanding (MLU). Both Quaternion models outperform the base Transformer model with up to 75% parameter savings. ers perform equally (or better) than vanilla Transformers. On this task, the partial adaptation performs better, improving Transformers by +0.7% accuracy while saving 25% parameters. Model Acc Params Transformer 94.8 400K Quaternion (full) 94.7 100K Quaternion (partial) 95.5 300K Table 5: Experimental results on subject-verb agreement (SVA) number prediction task. 5 Related Work The goal of learning effective representations lives at the heart of deep learning research. 
While most neural architectures for NLP have mainly explored the usage of real-valued representations (Vaswani et al., 2017; Bahdanau et al., 2014; Parikh et al., 2016), there have also been emerging interest in complex (Danihelka et al., 2016; Arjovsky et al., 2016; Gaudet and Maida, 2017) and hypercomplex representations (Parcollet et al., 2018b,a; Gaudet and Maida, 2017). Notably, progress on Quaternion and hypercomplex representations for deep learning is still in its infancy and consequently, most works on this topic are very recent. Gaudet and Maida proposed deep Quaternion networks for image classification, introducing basic tools such as Quaternion batch normalization or Quaternion initialization (Gaudet and Maida, 2017). In a similar vein, Quaternion RNNs and CNNs were proposed for speech recognition (Parcollet et al., 2018a,b). In parallel Zhu et al. proposed Quaternion CNNs and applied them to image classification and denoising tasks (Zhu et al., 2018). Comminiello et al. proposed Quaternion CNNs for sound detection (Comminiello et al., 2018). (Zhang et al., 2019) proposed Quaternion embeddings of knowledge graphs. A common theme is that Quaternion representations are helpful and provide utility over real-valued representations. The interest in non-real spaces can be attributed to several factors. Firstly, complex weight matrices used to parameterize RNNs help to combat vanishing gradients (Arjovsky et al., 2016). On the other hand, complex spaces are also intuitively linked to associative composition, along with holographic reduced representations (Plate, 1991; Nickel et al., 2016; Tay et al., 2017a). Asymmetry has also demonstrated utility in domains such as relational learning (Trouillon et al., 2016; Nickel et al., 2016) and question answering (Tay et al., 2018). Complex networks (Trabelsi et al., 2017), in general, have also demonstrated promise over real networks. In a similar vein, the hypercomplex Hamilton product provides a greater extent of expressiveness, similar to the complex Hermitian product, albeit with a 4-fold increase in interactions between real and imaginary components. In the case of Quaternion representations, due to parameter saving in the Hamilton product, models also enjoy a 75% reduction in parameter size. Our work draws important links to multihead (Vaswani et al., 2017) or multi-sense (Li and Jurafsky, 2015; Neelakantan et al., 2015) representations that are highly popular in NLP research. Intuitively, the four-component structure of Quaternion representations can also be interpreted as some kind of multi-headed architecture. The key difference is that the basic operators (e.g., Hamilton product) provides an inductive bias that encourages interactions between these components. Notably, the idea of splitting vectors has also been explored (Daniluk et al., 2017), which is in similar spirit to breaking a vector into four components. 1502 6 Conclusion This paper advocates for lightweight and efficient neural NLP via Quaternion representations. More concretely, we proposed two models - Quaternion attention model and Quaternion Transformer. We evaluate these models on eight different NLP tasks and a total of thirteen data sets. Across all data sets the Quaternion model achieves comparable performance while reducing parameter size. All in all, we demonstrated the utility and benefits of incorporating Quaternion algebra in state-of-theart neural models. 
We believe that this direction paves the way for more efficient and effective representation learning in NLP. Our Tensor2Tensor implementation of Quaternion Transformers will be released at https://github.com/ vanzytay/QuaternionTransformers. 7 Acknowledgements The authors thank the anonymous reviewers of ACL 2019 for their time, feedback and comments. References Mart´ın Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. Tensorflow: A system for large-scale machine learning. In 12th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 16). pages 265–283. Martin Arjovsky, Amar Shah, and Yoshua Bengio. 2016. Unitary evolution recurrent neural networks. In International Conference on Machine Learning. pages 1120–1128. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326 . Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2016. Enhanced lstm for natural language inference. arXiv preprint arXiv:1609.06038 . Danilo Comminiello, Marco Lella, Simone Scardapane, and Aurelio Uncini. 2018. Quaternion convolutional neural networks for detection and localization of 3d sound events. arXiv preprint arXiv:1812.06811 . Ivo Danihelka, Greg Wayne, Benigno Uria, Nal Kalchbrenner, and Alex Graves. 2016. Associative long short-term memory. arXiv preprint arXiv:1602.03032 . Michał Daniluk, Tim Rockt¨aschel, Johannes Welbl, and Sebastian Riedel. 2017. Frustratingly short attention spans in neural language modeling. arXiv preprint arXiv:1702.04521 . Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. 2018. Universal transformers. arXiv preprint arXiv:1807.03819 . Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 . Chase Gaudet and Anthony Maida. 2017. Deep quaternion networks. arXiv preprint arXiv:1712.04604 . Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1. Association for Computational Linguistics, pages 873–882. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. Scitail: A textual entailment dataset from science question answering. In Thirty-Second AAAI Conference on Artificial Intelligence. Wuwei Lan, Siyu Qiu, Hua He, and Wei Xu. 2017. A continuously growing dataset of sentential paraphrases. arXiv preprint arXiv:1708.00391 . Jiwei Li and Dan Jurafsky. 2015. Do multi-sense embeddings improve natural language understanding? arXiv preprint arXiv:1506.01070 . Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of lstms to learn syntaxsensitive dependencies. Transactions of the Association for Computational Linguistics 4:521–535. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909 . Andrew L. Maas, Raymond E. Daly, Peter T. 
Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Portland, Oregon, USA, pages 142–150. http://www.aclweb.org/anthology/P111015. 1503 Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2015. Efficient non-parametric estimation of multiple embeddings per word in vector space. arXiv preprint arXiv:1504.06654 . Phi Xuan Nguyen and Shafiq Joty. 2019. Phrase-based attentions. https://openreview.net/forum?id=r1xN5oA5tm. Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. Holographic embeddings of knowledge graphs. In Thirtieth Aaai conference on artificial intelligence. Titouan Parcollet, Mirco Ravanelli, Mohamed Morchid, Georges Linar`es, Chiheb Trabelsi, Renato De Mori, and Yoshua Bengio. 2018a. Quaternion recurrent neural networks. arXiv preprint arXiv:1806.04418 . Titouan Parcollet, Ying Zhang, Mohamed Morchid, Chiheb Trabelsi, Georges Linar`es, Renato De Mori, and Yoshua Bengio. 2018b. Quaternion convolutional neural networks for end-to-end automatic speech recognition. arXiv preprint arXiv:1806.07789 . Ankur P Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. arXiv preprint arXiv:1606.01933 . Dario Pavllo, David Grangier, and Michael Auli. 2018. Quaternet: A quaternion-based recurrent model for human motion. arXiv preprint arXiv:1805.06485 . Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). pages 1532–1543. Tony Plate. 1991. Holographic reduced representations: Convolution algebra for compositional distributed representations. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training . Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners . Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603 . Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing. pages 1631–1642. Yi Tay, Anh Tuan Luu, and Siu Cheung Hui. 2018. Hermitian co-attention networks for text matching in asymmetrical domains. Yi Tay, Minh C Phan, Luu Anh Tuan, and Siu Cheung Hui. 2017a. Learning to rank question answer pairs with holographic dual lstm architecture. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, pages 695–704. Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2017b. Compare, compress and propagate: Enhancing neural architectures with alignment factorization for natural language inference. arXiv preprint arXiv:1801.00102 . Chiheb Trabelsi, Olexa Bilaniuk, Ying Zhang, Dmitriy Serdyuk, Sandeep Subramanian, Jo˜ao Felipe Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, and Christopher J Pal. 2017. Deep complex networks. arXiv preprint arXiv:1705.09792 . 
Th´eo Trouillon, Johannes Welbl, Sebastian Riedel, ´Eric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In International Conference on Machine Learning. pages 2071–2080. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. pages 5998–6008. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. arXiv preprint arXiv:1702.03814 . Artit Wangperawong. 2018. Attending to mathematical language with transformers. CoRR abs/1812.02825. http://arxiv.org/abs/1812.02825. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426 . Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 2013–2018. Shuai Zhang, Yi Tay, Lina Yao, and Qi Liu. 2019. Quaternion knowledge graph embeddings. arXiv preprint arXiv:1904.10281 . Xuanyu Zhu, Yi Xu, Hongteng Xu, and Changjian Chen. 2018. Quaternion convolutional neural networks. In Proceedings of the European Conference on Computer Vision (ECCV). pages 631–647.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1504–1519 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1504 Sparse Sequence-to-Sequence Models Ben Peters† Vlad Niculae† and Andr´e F. T. Martins†‡ †Instituto de Telecomunicac¸˜oes, Lisbon, Portugal ‡Unbabel, Lisbon, Portugal [email protected], [email protected], [email protected] Abstract Sequence-to-sequence models are a powerful workhorse of NLP. Most variants employ a softmax transformation in both their attention mechanism and output layer, leading to dense alignments and strictly positive output probabilities. This density is wasteful, making models less interpretable and assigning probability mass to many implausible outputs. In this paper, we propose sparse sequence-to-sequence models, rooted in a new family of α-entmax transformations, which includes softmax and sparsemax as particular cases, and is sparse for any α > 1. We provide fast algorithms to evaluate these transformations and their gradients, which scale well for large vocabulary sizes. Our models are able to produce sparse alignments and to assign nonzero probability to a short list of plausible outputs, sometimes rendering beam search exact. Experiments on morphological inflection and machine translation reveal consistent gains over dense models. 1 Introduction Attention-based sequence-to-sequence (seq2seq) models have proven useful for a variety of NLP applications, including machine translation (Bahdanau et al., 2015; Vaswani et al., 2017), speech recognition (Chorowski et al., 2015), abstractive summarization (Chopra et al., 2016), and morphological inflection generation (Kann and Sch¨utze, 2016), among others. In part, their strength comes from their flexibility: many tasks can be formulated as transducing a source sequence into a target sequence of possibly different length. However, conventional seq2seq models are dense: they compute both attention weights and output probabilities with the softmax function (Bridle, 1990), which always returns positive values. This results in dense attention alignments, in which each source position is attended to at each d r a w e d </s> n </s> </s> 66.4% 32.2% 1.4% Figure 1: The full beam search of our best performing morphological inflection model when generating the past participle of the verb “draw”. The model gives nonzero probability to exactly three hypotheses, including the correct form (“drawn”) and the form that would be correct if “draw” were regular (“drawed”). target position, and in dense output probabilities, in which each vocabulary type always has nonzero probability of being generated. This contrasts with traditional statistical machine translation systems, which are based on sparse, hard alignments, and decode by navigating through a sparse lattice of phrase hypotheses. Can we transfer such notions of sparsity to modern neural architectures? And if so, do they improve performance? In this paper, we provide an affirmative answer to both questions by proposing neural sparse seq2seq models that replace the softmax transformations (both in the attention and output) by sparse transformations. Our innovations are rooted in the recently proposed sparsemax transformation (Martins and Astudillo, 2016) and FenchelYoung losses (Blondel et al., 2019). Concretely, we consider a family of transformations (dubbed α-entmax), parametrized by a scalar α, based on the Tsallis entropies (Tsallis, 1988). 
This family includes softmax (α = 1) and sparsemax (α = 2) as particular cases. Crucially, entmax transforms are sparse for all α > 1. Our models are able to produce both sparse attention, a form of inductive bias that increases focus on relevant source words and makes alignments more interpretable, and sparse output probabilities, which together with auto-regressive mod1505 at on , This So And Here the tree of life . view look glimpse kind looking way vision gaze is another 92.9% 5.9% 1.3% <0.1% 49.8% 27.1% 19.9% 2.0% 0.9% 0.2% <0.1% <0.1% 95.7% 5.9% 1.3% Figure 2: Forced decoding using sparsemax attention and 1.5-entmax output for the German source sentence, “Dies ist ein weiterer Blick auf den Baum des Lebens.” Predictions with nonzero probability are shown at each time step. All other target types have probability exactly zero. When consecutive predictions consist of a single word, we combine their borders to showcase auto-completion potential. The selected gold targets are in boldface. els can lead to probability distributions that are nonzero only for a finite subset of all possible strings. In certain cases, a short list of plausible outputs can be enumerated without ever exhausting the beam (Figure 1), rendering beam search exact. Sparse output seq2seq models can also be used for adaptive, sparse next word suggestion (Figure 2). Overall, our contributions are as follows: • We propose an entmax sparse output layer, together with a natural loss function. In largevocabulary settings, sparse outputs avoid wasting probability mass on unlikely outputs, substantially improving accuracy. For tasks with little output ambiguity, entmax losses, coupled with beam search, can often produce exact finite sets with only one or a few sequences. To our knowledge, this is the first study of sparse output probabilities in seq2seq problems. • We construct entmax sparse attention, improving interpretability at no cost in accuracy. We show that the entmax gradient has a simple form (Proposition 2), revealing an insightful missing link between softmax and sparsemax. • We derive a novel exact algorithm for the case of 1.5-entmax, achieving processing speed close to softmax on the GPU, even with large vocabulary sizes. For arbitrary α, we investigate a GPU-friendly approximate algorithm.1 We experiment on two tasks: one character-level with little ambiguity (morphological inflection generation) and another word-level, with more ambiguity (neural machine translation). The results 1Our standalone Pytorch entmax implementation is available at https://github.com/deep-spin/entmax. show clear benefits of our approach, both in terms of accuracy and interpretability. 2 Background The underlying architecture we focus on is an RNNbased seq2seq with global attention and inputfeeding (Luong et al., 2015). We provide a brief description of this architecture, with an emphasis on the attention mapping and the loss function. Notation. Scalars, vectors, and matrices are denoted respectively as a, a, and A. We denote the d– probability simplex (the set of vectors representing probability distributions over d choices) by △d := {p ∈Rd : p ≥0, ∥p∥1 = 1}. We denote the positive part as [a]+ := max{a, 0}, and by [a]+ its elementwise application to vectors. We denote the indicator vector ey := [0, . . . , 0, 1 |{z} y , 0, . . . , 0]. Encoder. Given an input sequence of tokens x := [x1, . . . 
, xJ], the encoder applies an embedding lookup followed by K layered bidirectional LSTMs (Hochreiter and Schmidhuber, 1997), resulting in encoder states [h1, . . . , hJ]. Decoder. The decoder generates output tokens y1, . . . , yT , one at a time, terminated by a stop symbol. At each time step t, it computes a probability distribution for the next generated word yt, as follows. Given the current state st of the decoder LSTM, an attention mechanism (Bahdanau et al., 2015) computes a focused, fixed-size summary of the encodings [h1, . . . , hJ], using st as a query vector. This is done by computing token-level scores zj := s⊤ t W (z)hj, then taking a weighted average ct := J X j=1 πjhj, where π := softmax(z). (1) 1506 The contextual output is the non-linear combination ot := tanh(W (o)[st; ct] + b(o)), yielding the predictive distribution of the next word p(yt =· | x, y1, ..., yt−1) := softmax(Vot + b). (2) The output ot, together with the embedding of the predicted byt, feed into the decoder LSTM for the next step, in an auto-regressive manner. The model is trained to maximize the likelihood of the correct target sentences, or equivalently, to minimize L = X (x,y)∈D |y| X t=1 (−log softmax(Vot))yt | {z } Lsoftmax(yt,Vot) . (3) A central building block in the architecture is the transformation softmax: Rd →△d, softmax(z)j := exp(zj) P i exp(zi), (4) which maps a vector of scores z into a probability distribution (i.e., a vector in △d). As seen above, the softmax mapping plays two crucial roles in the decoder: first, in computing normalized attention weights (Eq. 1), second, in computing the predictive probability distribution (Eq. 2). Since exp ⪈0, softmax never assigns a probability of zero to any word, so we may never fully rule out non-important input tokens from attention, nor unlikely words from the generation vocabulary. While this may be advantageous for dealing with uncertainty, it may be preferrable to avoid dedicating model resources to irrelevant words. In the next section, we present a strategy for differentiable sparse probability mappings. We show that our approach can be used to learn powerful seq2seq models with sparse outputs and sparse attention mechanisms. 3 Sparse Attention and Outputs 3.1 The sparsemax mapping and loss To pave the way to a more general family of sparse attention and losses, we point out that softmax (Eq. 4) is only one of many possible mappings from Rd to △d. Martins and Astudillo (2016) introduce sparsemax: an alternative to softmax which tends to yield sparse probability distributions: sparsemax(z) := argmin p∈△d ∥p −z∥2. (5) Since Eq. 5 is a projection onto △d, which tends to yield sparse solutions, the predictive distribution p⋆:= sparsemax(z) is likely to assign exactly zero probability to low-scoring choices. They also propose a corresponding loss function to replace the negative log likelihood loss Lsoftmax (Eq. 3): Lsparsemax(y, z):= 1 2 ∥ey−z∥2−∥p⋆−z∥2 , (6) This loss is smooth and convex on z and has a margin: it is zero if and only if zy ≥zy′ + 1 for any y′ ̸= y (Martins and Astudillo, 2016, Proposition 3). Training models with the sparsemax loss requires its gradient (cf. Appendix A.2): ∇zLsparsemax(y, z) = −ey + p⋆. For using the sparsemax mapping in an attention mechanism, Martins and Astudillo (2016) show that it is differentiable almost everywhere, with ∂sparsemax(z) ∂z = diag(s) − 1 ∥s∥1 ss⊤, where sj = 1 if p⋆ j > 0, otherwise sj = 0. Entropy interpretation. 
At first glance, sparsemax appears very different from softmax, and a strategy for producing other sparse probability mappings is not obvious. However, the connection becomes clear when considering the variational form of softmax (Wainwright and Jordan, 2008): softmax(z) = argmax p∈△d p⊤z + HS(p), (7) where HS(p) := −P j pj log pj is the well-known Gibbs-Boltzmann-Shannon entropy with base e. Likewise, letting HG(p) := 1 2 P j pj(1 −pj) be the Gini entropy, we can rearrange Eq. 5 as sparsemax(z) = argmax p∈△d p⊤z + HG(p), (8) crystallizing the connection between softmax and sparsemax: they only differ in the choice of entropic regularizer. 3.2 A new entmax mapping and loss family The parallel above raises a question: can we find interesting interpolations between softmax and sparsemax? We answer affirmatively, by considering a generalization of the Shannon and Gini entropies proposed by Tsallis (1988): a family of entropies parametrized by a scalar α > 1 which we call Tsallis α-entropies: HT α(p):= ( 1 α(α−1) P j  pj −pα j  , α ̸= 1, HS(p), α = 1. (9) 1507 −2 0 2 t 0.0 0.5 1.0 α = 1 (softmax) α = 1.25 α = 1.5 α = 2 (sparsemax) α = 4 Figure 3: Illustration of entmax in the two-dimensional case α-entmax([t, 0])1. All mappings except softmax saturate at t = ±1/α−1. While sparsemax is piecewise linear, mappings with 1 < α < 2 have smooth corners. This family is continuous, i.e., limα→1 HT α(p) = HS(p) for any p ∈△d (cf. Appendix A.1). Moreover, HT 2 ≡HG. Thus, Tsallis entropies interpolate between the Shannon and Gini entropies. Starting from the Tsallis entropies, we construct a probability mapping, which we dub entmax: α-entmax(z) := argmax p∈△d p⊤z + HT α(p), (10) and, denoting p⋆:= α-entmax(z), a loss function Lα(y, z) := (p⋆−ey)⊤z + HT α(p⋆) (11) The motivation for this loss function resides in the fact that it is a Fenchel-Young loss (Blondel et al., 2019), as we briefly explain in Appendix A.2. Then, 1-entmax ≡softmax and 2-entmax ≡sparsemax. Similarly, L1 is the negative log likelihood, and L2 is the sparsemax loss. For all α > 1, entmax tends to produce sparse probability distributions, yielding a function family continuously interpolating between softmax and sparsemax, cf. Figure 3. The gradient of the entmax loss is ∇zLα(y, z) = −ey + p⋆. (12) Tsallis entmax losses have useful properties including convexity, differentiability, and a hingelike separation margin property: the loss incurred becomes zero when the score of the correct class is separated by the rest by a margin of 1/α−1. When separation is achieved, p⋆= ey (Blondel et al., 2019). This allows entmax seq2seq models to be adaptive to the degree of uncertainty present: decoders may make fully confident predictions at “easy” time steps, while preserving sparse uncertainty when a few choices are possible (as exemplified in Figure 2). Tsallis entmax probability mappings have not, to our knowledge, been used in attention mechanisms. They inherit the desirable sparsity of sparsemax, while exhibiting smoother, differentiable curvature, whereas sparsemax is piecewise linear. 3.3 Computing the entmax mapping Whether we want to use α-entmax as an attention mapping, or Lα as a loss function, we must be able to efficiently compute p⋆= α-entmax(z), i.e., to solve the maximization in Eq. 10. For α = 1, the closed-form solution is given by Eq. 4. 
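Before turning to the general case α > 1, a quick numerical check of the entropy family in Eq. 9 may help: the sketch below evaluates the Tsallis α-entropy for a fixed distribution and confirms the two endpoints of the family, approaching the Shannon entropy as α → 1 (the continuity argued in Appendix A.1) and coinciding with the Gini entropy at α = 2. The code is an illustrative aid written directly from the definition; it is not part of the released implementation.

```python
import numpy as np

def tsallis_entropy(p, alpha):
    """Tsallis alpha-entropy H^T_alpha(p), as in Eq. 9."""
    p = np.asarray(p, dtype=float)
    if np.isclose(alpha, 1.0):
        nz = p[p > 0]
        return -(nz * np.log(nz)).sum()        # Shannon entropy (base e)
    return (p - p ** alpha).sum() / (alpha * (alpha - 1.0))

p = np.array([0.7, 0.2, 0.1])
shannon = tsallis_entropy(p, 1.0)
gini = 0.5 * (p * (1.0 - p)).sum()

# Continuity at alpha = 1 (cf. Appendix A.1) and H^T_2 = Gini entropy:
print(abs(tsallis_entropy(p, 1.0001) - shannon) < 1e-3)   # True
print(np.isclose(tsallis_entropy(p, 2.0), gini))          # True
```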
For α > 1, given z, we show that there is a unique threshold τ such that (Appendix C.1, Lemma 2): α-entmax(z) = [(α −1)z −τ1] 1/α−1 + , (13) i.e., entries with score zj ≤τ/α−1 get zero probability. For sparsemax (α = 2), the problem amounts to Euclidean projection onto △d, for which two types of algorithms are well studied: i. exact, based on sorting (Held et al., 1974; Michelot, 1986), ii. iterative, bisection-based (Liu and Ye, 2009). The bisection approach searches for the optimal threshold τ numerically. Blondel et al. (2019) generalize this approach in a way applicable to α-entmax. The resulting algorithm is (cf. Appendix C.1 for details): Algorithm 1 Compute α-entmax by bisection. 1 Define p(τ) := [z −τ] 1/α−1 + , set z ←(α −1)z 2 τmin ←max(z) −1; τmax ←max(z) −d1−α 3 for t ∈1, . . . , T do 4 τ ←(τmin + τmax)/2 5 Z ←P j pj(τ) 6 if Z < 1 then τmax ←τ else τmin ←τ 7 return 1/Z p(τ) Algorithm 1 works by iteratively narrowing the interval containing the exact solution by exactly half. Line 7 ensures that approximate solutions are valid probability distributions, i.e., that p⋆∈△d. Although bisection is simple and effective, an exact sorting-based algorithm, like for sparsemax, has the potential to be faster and more accurate. Moreover, as pointed out by Condat (2016), when exact solutions are required, it is possible to construct inputs z for which bisection requires arbitrarily many iterations. To address these issues, we propose a 1508 novel, exact algorithm for 1.5-entmax, halfway between softmax and sparsemax. Algorithm 2 Compute 1.5-entmax(z) exactly. 1 Sort z, yielding z[d] ≤· · · ≤z[1]; set z ←z/2 2 for ρ ∈1, . . . , d do 3 M(ρ) ←1/ρ Pρ j=1z[j] 4 S (ρ) ←Pρ j=1 z[j] −M(ρ) 2 5 τ (ρ) ←M(ρ) − p 1/ρ (1 −S(ρ)) 6 if z[ρ+1] ≤τ(ρ) ≤z[ρ] then 7 return p⋆= [z −τ 1]2 + We give a full derivation in Appendix C.2. As written, Algorithm 2 is O(d log d) because of the sort; however, in practice, when the solution p⋆has no more than k nonzeros, we do not need to fully sort z, just to find the k largest values. Our experiments in §4.2 reveal that a partial sorting approach can be very efficient and competitive with softmax on the GPU, even for large d. Further speed-ups might be available following the strategy of Condat (2016), but our simple incremental method is very easy to implement on the GPU using primitives available in popular libraries (Paszke et al., 2017). Our algorithm resembles the aforementioned sorting-based algorithm for projecting onto the simplex (Michelot, 1986). Both algorithms rely on the optimality conditions implying an analyticallysolvable equation in τ: for sparsemax (α = 2), this equation is linear, for α = 1.5 it is quadratic (Eq. 48 in Appendix C.2). Thus, exact algorithms may not be available for general values of α. 3.4 Gradient of the entmax mapping The following result shows how to compute the backward pass through α-entmax, a requirement when using α-entmax as an attention mechanism. Proposition 1. Let α ≥1. Assume we have computed p⋆= α-entmax(z), and define the vector si = ( (p⋆ i )2−α, p⋆ i > 0, 0, otherwise. Then, ∂α-entmax(z) ∂z = diag(s) − 1 ∥s∥1 ss⊤. Proof: The result follows directly from the more general Proposition 2, which we state and prove in Appendix B, noting that  tα−t α(α−1) ′′ = tα−2. The gradient expression recovers the softmax and sparsemax Jacobians with α = 1 and α = 2, respectively (Martins and Astudillo, 2016, Eqs. 8 and 12), thereby providing another relationship between the two mappings. 
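Both algorithms, as well as the Jacobian of Proposition 1, can be prototyped in a few lines. The NumPy sketch below follows the pseudocode directly; it is an illustration written from the text, not the released PyTorch implementation from footnote 1.

```python
import numpy as np

def entmax_bisect(z, alpha=1.5, n_iter=50):
    """Algorithm 1: alpha-entmax by bisection over the threshold tau."""
    z = (alpha - 1.0) * np.asarray(z, dtype=float)
    d = z.size
    tau_lo, tau_hi = z.max() - 1.0, z.max() - d ** (1.0 - alpha)
    for _ in range(n_iter):
        tau = 0.5 * (tau_lo + tau_hi)
        p = np.clip(z - tau, 0.0, None) ** (1.0 / (alpha - 1.0))
        if p.sum() < 1.0:
            tau_hi = tau
        else:
            tau_lo = tau
    return p / p.sum()                      # line 7: renormalize the approximate solution

def entmax15_exact(z):
    """Algorithm 2: exact sorting-based 1.5-entmax."""
    z = np.asarray(z, dtype=float) / 2.0
    zs = np.sort(z)[::-1]                   # z[1] >= ... >= z[d]
    d = zs.size
    for rho in range(1, d + 1):
        mean = zs[:rho].mean()                        # M(rho)
        var = ((zs[:rho] - mean) ** 2).sum()          # S(rho)
        if var > 1.0:                                 # tau(rho) would be infinite (Lemma 3)
            break
        tau = mean - np.sqrt((1.0 - var) / rho)       # tau(rho)
        lower = zs[rho] if rho < d else -np.inf       # convention: z[d+1] = -inf
        if lower <= tau <= zs[rho - 1]:
            return np.clip(z - tau, 0.0, None) ** 2
    raise RuntimeError("no valid support size found (should not happen)")

def entmax_jvp(p_star, v, alpha=1.5):
    """Proposition 1: multiply the entmax Jacobian at p_star by a vector v."""
    s = np.where(p_star > 0, p_star ** (2.0 - alpha), 0.0)
    sv = s * v
    return sv - s * (sv.sum() / s.sum())

z = np.array([1.7, 0.9, 0.8, -1.0])
p_bis, p_exact = entmax_bisect(z, 1.5), entmax15_exact(z)
print(np.allclose(p_bis, p_exact, atol=1e-6))   # bisection matches the exact solution
print(entmax_jvp(p_exact, np.ones_like(z)))     # ~0: adding a constant to z leaves entmax unchanged
```

In practice these operations are batched on the GPU and, for Algorithm 2, the full sort can be replaced by a partial top-k sort when the support is known to be small, as discussed in §3.3.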
Perhaps more interestingly, Proposition 1 shows why the sparsemax Jacobian depends only on the support and not on the actual values of p⋆: the sparsemax Jacobian is equal for p⋆= [.99, .01, 0] and p⋆= [.5, .5, 0]. This is not the case for α-entmax with α ̸= 2, suggesting that the gradients obtained with other values of α may be more informative. Finally, we point out that the gradient of entmax losses involves the entmax mapping (Eq. 12), and therefore Proposition 1 also gives the Hessian of the entmax loss. 4 Experiments The previous section establishes the computational building blocks required to train models with entmax sparse attention and loss functions. We now put them to use for two important NLP tasks, morphological inflection and machine translation. These two tasks highlight the characteristics of our innovations in different ways. Morphological inflection is a character-level task with mostly monotonic alignments, but the evaluation demands exactness: the predicted sequence must match the gold standard. On the other hand, machine translation uses a word-level vocabulary orders of magnitude larger and forces a sparse output layer to confront more ambiguity: any sentence has several valid translations and it is not clear beforehand that entmax will manage this well. Despite the differences between the tasks, we keep the architecture and training procedure as similar as possible. We use two layers for encoder and decoder LSTMs and apply dropout with probability 0.3. We train with Adam (Kingma and Ba, 2015), with a base learning rate of 0.001, halved whenever the loss increases on the validation set. We use a batch size of 64. At test time, we select the model with the best validation accuracy and decode with a beam size of 5. We implemented all models with OpenNMT-py (Klein et al., 2017).2 In our primary experiments, we use three α values for the attention and loss functions: α = 1 (softmax), α = 1.5 (to which our novel Algorithm 2 applies), and α = 2 (sparsemax). We also investigate the effect of tuning α with increased granularity. 2Our experiment code is at https://github.com/ deep-spin/OpenNMT-entmax. 1509 α high medium output attention (avg.) (ens.) (avg.) (ens.) 1 1 93.15 94.20 82.55 85.68 1.5 92.32 93.50 83.20 85.63 2 90.98 92.60 83.13 85.65 1.5 1 94.36 94.96 84.88 86.38 1.5 94.44 95.00 84.93 86.55 2 94.05 94.74 84.93 86.59 2 1 94.59 95.10 84.95 86.41 1.5 94.47 95.01 85.03 86.61 2 94.32 94.89 84.96 86.47 UZH (2018) 96.00 86.64 Table 1: Average per-language accuracy on the test set (CoNLL–SIGMORPHON 2018 task 1) averaged or ensembled over three runs. 4.1 Morphological Inflection The goal of morphological inflection is to produce an inflected word form (such as “drawn”) given a lemma (“draw”) and a set of morphological tags ({verb, past, participle}). We use the data from task 1 of the CoNLL–SIGMORPHON 2018 shared task (Cotterell et al., 2018). shared task Training. We train models under two data settings: high (approximately 10,000 samples per language in 86 languages) and medium (approximately 1,000 training samples per language in 102 languages). We depart from previous work by using multilingual training: each model is trained on the data from all languages in its data setting. This allows parameters to be shared between languages, eliminates the need to train language-specific models, and may provide benefits similar to other forms of data augmentation (Bergmanis et al., 2017). 
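Because the only change relative to a standard seq2seq recipe is the choice of output mapping and loss, the substitution is mechanical. The PyTorch sketch below illustrates a single decoder output step; it assumes the released entmax package from footnote 1 exposes entmax15 and an Entmax15Loss criterion (as its repository suggests). Those names, the layer sizes, and the toy batch are illustrative assumptions, not the authors' actual OpenNMT-py fork.

```python
import torch
from entmax import entmax15, Entmax15Loss   # assumed public API of the released package

vocab_size, hidden = 17993, 500              # illustrative sizes, matching the MT setup
proj = torch.nn.Linear(hidden, vocab_size)   # analogue of V o_t + b in Eq. 2
criterion = Entmax15Loss()                   # replaces the cross-entropy loss of Eq. 3

def output_step(o_t, target):
    """One decoder step: sparse predictive distribution and its training loss."""
    scores = proj(o_t)
    p_next = entmax15(scores, dim=-1)        # many entries are exactly zero
    loss = criterion(scores, target)
    return p_next, loss.sum()                # .sum() is a no-op if the loss is already reduced

o_t = torch.randn(8, hidden)                 # a batch of 8 contextual outputs o_t
target = torch.randint(0, vocab_size, (8,))
p_next, loss = output_step(o_t, target)
print((p_next > 0).sum(dim=-1))              # support sizes, typically far below |V|
loss.backward()                              # gradient of the loss w.r.t. scores is p* - e_y (Eq. 12)
```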
Each sample is presented as a pair: the source contains the lemma concatenated to the morphological tags and a special language identification token (Johnson et al., 2017; Peters et al., 2017), and the target contains the inflected form. As an example, the source sequence for Figure 1 is english verb participle past d r a w. Although the set of inflectional tags is not sequential, treating it as such is simple to implement and works well in practice (Kann and Sch¨utze, 2016). All models use embedding and hidden state sizes of 300. We validate at the end of every epoch in the high setting and only once every ten epochs in medium because of its smaller size. Accuracy. Results are shown in Table 1. We report the official metric of the shared task, word accuracy averaged across languages. In addition to the average results of three individual model runs, we use an ensemble of those models, where we decode by averaging the raw probabilities at each time step. Our best sparse loss models beat the softmax baseline by nearly a full percentage point with ensembling, and up to two and a half points in the medium setting without ensembling. The choice of attention has a smaller impact. In both data settings, our best model on the validation set outperforms all submissions from the 2018 shared task except for UZH (Makarov and Clematide, 2018), which uses a more involved imitation learning approach and larger ensembles. In contrast, our only departure from standard seq2seq training is the drop-in replacement of softmax by entmax. Sparsity. Besides their accuracy, we observed that entmax models made very sparse predictions: the best configuration in Table 1 concentrates all probability mass into a single predicted sequence in 81% validation samples in the high data setting, and 66% in the more difficult medium setting. When the model does give probability mass to more than one sequence, the predictions reflect reasonable ambiguity, as shown in Figure 1. Besides enhancing interpretability, sparsity in the output also has attractive properties for beam search decoding: when the beam covers all nonzero-probability hypotheses, we have a certificate of globally optimal decoding, rendering beam search exact. This is the case on 87% of validation set sequences in the high setting, and 79% in medium. To our knowledge, this is the first instance of a neural seq2seq model that can offer optimality guarantees. 4.2 Machine Translation We now turn to a highly different seq2seq regime in which the vocabulary size is much larger, there is a great deal of ambiguity, and sequences can generally be translated in several ways. We train models for three language pairs in both directions: • IWSLT 2017 German ↔English (DE↔EN, Cettolo et al., 2017): training size 206,112. • KFTT Japanese ↔English (JA↔EN, Neubig, 2011): training size of 329,882. • WMT 2016 Romanian ↔English (RO↔EN, Bojar et al., 2016): training size 612,422, diacritics removed (following Sennrich et al., 2016b). 1510 method DEEN ENDE JAEN ENJA ROEN ENRO softmax 25.70 ± 0.15 21.86 ± 0.09 20.22 ± 0.08 25.21 ± 0.29 29.12 ± 0.18 28.12 ± 0.18 1.5-entmax 26.17 ± 0.13 22.42 ± 0.08 20.55 ± 0.30 26.00 ± 0.31 30.15 ± 0.06 28.84 ± 0.10 sparsemax 24.69 ± 0.22 20.82 ± 0.19 18.54 ± 0.11 23.84 ± 0.37 29.20 ± 0.16 28.03 ± 0.16 Table 2: Machine translation comparison of softmax, sparsemax, and the proposed 1.5-entmax as both attention mapping and loss function. Reported is tokenized test BLEU averaged across three runs (higher is better). Training. 
We use byte pair encoding (BPE; Sennrich et al., 2016a) to ensure an open vocabulary. We use separate segmentations with 25k merge operations per language for RO↔EN and a joint segmentation with 32k merges for the other language pairs. DE↔EN is validated once every 5k steps because of its smaller size, while the other sets are validated once every 10k steps. We set the maximum number of training steps at 120k for RO↔EN and 100k for other language pairs. We use 500 dimensions for word vectors and hidden states. Evaluation. Table 2 shows BLEU scores (Papineni et al., 2002) for the three models with α ∈{1, 1.5, 2}, using the same value of α for the attention mechanism and loss function. We observe that the 1.5-entmax configuration consistently performs best across all six choices of language pair and direction. These results support the notion that the optimal function is somewhere between softmax and sparsemax, which motivates a more fine-grained search for α; we explore this next. Fine-grained impact of α. Algorithm 1 allows us to further investigate the marginal effect of varying the attention α and the loss α, while keeping the other fixed. We report DEEN validation accuracy on a fine-grained α grid in Figure 4. On this dataset, moving from softmax toward sparser attention (left) has a very small positive effect on accuracy, suggesting that the benefit in interpretability does not hurt accuracy. The impact of the loss function α (right) is much more visible: there is a distinct optimal value around α = 1.33, with performance decreasing for too large values. Interpolating between softmax and sparsemax thus inherits the benefits of both, and our novel Algorithm 2 for α = 1.5 is confirmed to strike a good middle ground. This experiment also confirms that bisection is effective in practice, despite being inexact. Extrapolating beyond the sparsemax loss (α > 2) does not seem to perform well. Sparsity. In order to form a clearer idea of how sparse entmax becomes, we measure the average method # attended # target words softmax 24.25 17993 1.5-entmax 5.55 16.13 sparsemax 3.75 7.55 Table 3: Average number of nonzeros in the attention and output distributions for the DEEN validation set. number of nonzero indices on the DEEN validation set and show it in Table 3. As expected, 1.5-entmax is less sparse than sparsemax as both an attention mechanism and output layer. In the attention mechanism, 1.5-entmax’s increased support size does not come at the cost of much interpretability, as Figure 5 demonstrates. In the output layer, 1.5-entmax assigns positive probability to only 16.13 target types out of a vocabulary of 17,993, meaning that the supported set of words often has an intuitive interpretation. Figure 2 shows the sparsity of the 1.5-entmax output layer in practice: the support becomes completely concentrated when generating a phrase like “the tree of life”, but grows when presenting a list of synonyms (“view”, “look”, “glimpse”, and so on). This has potential practical applications as a predictive translation system (Green et al., 2014), where the model’s support set serves as a list of candidate auto-completions at each time step. Training time. Importantly, the benefits of sparsity do not come at a high computational cost. Our proposed Algorithm 2 for 1.5-entmax runs on the GPU at near-softmax speeds (Figure 6). For other α values, bisection (Algorithm 1) is slightly more costly, but practical even for large vocabulary sizes. 
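For reference, the support-size statistics reported in Table 3 require only counting nonzero probabilities; a minimal sketch of that bookkeeping, with hypothetical arrays standing in for distributions collected from a model on a validation set:

```python
import numpy as np

def avg_support_size(distributions, tol=0.0):
    """Average number of entries with probability > tol, one row per distribution.

    `distributions` is an (n, d) array of attention weights or output
    probabilities gathered over a dataset.
    """
    return (np.asarray(distributions) > tol).sum(axis=1).mean()

# Hypothetical stand-ins: a dense softmax-style output vs. a sparse entmax-style one.
softmax_out = np.full((4, 17993), 1.0 / 17993)   # dense: support equals |V|
entmax_out = np.zeros((4, 17993))
entmax_out[:, :16] = 1.0 / 16                    # sparse: small support

print(avg_support_size(softmax_out))   # 17993.0
print(avg_support_size(entmax_out))    # 16.0
```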
On DEEN, bisection is capable of processing about 10,500 target words per second on a single Nvidia GeForce GTX 1080 GPU, compared to 13,000 words per second for 1.5-entmax with Algorithm 2 and 14,500 words per second with softmax. On the smaller-vocabulary morphology datasets, Algorithm 2 is nearly as fast as softmax. 1511 1.00 1.25 1.50 1.75 2.00 2.25 attention α 60% 61% 62% 63% validation accuracy 1.00 1.25 1.50 1.75 2.00 2.25 output α Figure 4: Effect of tuning α on DEEN, for attention (left) and for output (right), while keeping the other α = 1.5. Aber wir beginnen , eine Veränderung zu sehen . But we start to see a change . </s> Figure 5: Attention weights produced by the DEEN 1.5-entmax model. Nonzero weights are outlined. 5 Related Work Sparse attention. Sparsity in the attention and in the output have different, but related, motivations. Sparse attention can be justified as a form of inductive bias, since for tasks such as machine translation one expects only a few source words to be relevant for each translated word. Dense attention probabilities are particularly harmful for long sequences, as shown by Luong et al. (2015), who propose “local attention” to mitigate this problem. Combining sparse attention with fertility constraints has been recently proposed by Malaviya et al. (2018). Hard attention (Xu et al., 2015; Aharoni and Goldberg, 2017; Wu et al., 2018) selects exactly one source token. Its discrete, non-differentiable nature requires imitation learning or Monte Carlo policy gradient approximations, which drastically complicate training. In contrast, entmax is a differentiable, easy to use, drop-in softmax replacement. A recent study by Jain and Wallace (2019) tackles the limitations of attention probabilities to provide interpretability. They only study dense attention in classification tasks, where attention is less crucial for the final predictions. In their conclusions, the authors defer 2000 4000 6000 seconds 57.0% 58.5% 60.0% 61.5% 63.0% validation accuracy softmax 1.5-entmax Figure 6: Training timing on three DEEN runs. Markers show validation checkpoints for one of the runs. to future work exploring sparse attention mechanisms and seq2seq models. We believe our paper can foster interesting investigation in this area. Losses for seq2seq models. Mostly motivated by the challenges of large vocabulary sizes in seq2seq, an important research direction tackles replacing the cross-entropy loss with other losses or approximations (Bengio and Sen´ecal, 2008; Morin and Bengio, 2005; Kumar and Tsvetkov, 2019). While differently motivated, some of the above strategies (e.g., hierarchical prediction) could be combined with our proposed sparse losses. Niculae et al. (2018) use sparsity to predict interpretable sets of structures. Since auto-regressive seq2seq makes no factorization assumptions, their strategy cannot be applied without approximations, such as in Edunov et al. (2018). 6 Conclusion and Future Work We proposed sparse sequence-to-sequence models and provided fast algorithms to compute their attention and output transformations. Our approach yielded consistent improvements over dense models on morphological inflection and machine translation, while inducing interpretability in both attention and output distributions. Sparse output layers also provide exactness when the number of possible hypotheses does not exhaust beam search. Given the ubiquity of softmax in NLP, entmax has many potential applications. 
A natural next step is to apply entmax to self-attention (Vaswani et al., 1512 2017). In a different vein, the strong morphological inflection results point to usefulness in other tasks where probability is concentrated in a small number of hypotheses, such as speech recognition. Acknowledgments This work was supported by the European Research Council (ERC StG DeepSPIN 758969), and by the Fundac¸˜ao para a Ciˆencia e Tecnologia through contracts UID/EEA/50008/2019 and CMUPERI/TIC/0046/2014 (GoLocal). We thank Mathieu Blondel, Nikolay Bogoychev, Gonc¸alo Correia, Erick Fonseca, Pedro Martins, Tsvetomila Mihaylova, Miguel Rios, Marcos Treviso, and the anonymous reviewers, for helpful discussion and feedback. References Roee Aharoni and Yoav Goldberg. 2017. Morphological inflection generation with hard monotonic attention. In Proc. ACL. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. ICLR. Yoshua Bengio and Jean-S´ebastien Sen´ecal. 2008. Adaptive importance sampling to accelerate training of a neural probabilistic language model. IEEE Transactions on Neural Networks, 19(4):713–722. Toms Bergmanis, Katharina Kann, Hinrich Sch¨utze, and Sharon Goldwater. 2017. Training data augmentation for low-resource morphological inflection. In Proc. CoNLL–SIGMORPHON. Mathieu Blondel, Andr´e FT Martins, and Vlad Niculae. 2019. Learning classifiers with Fenchel-Young losses: Generalized entropies, margins, and algorithms. In Proc. AISTATS. Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, et al. 2016. Findings of the 2016 conference on machine translation. In ACL WMT. John S Bridle. 1990. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In Neurocomputing, pages 227–236. Springer. M Cettolo, M Federico, L Bentivogli, J Niehues, S St¨uker, K Sudoh, K Yoshino, and C Federmann. 2017. Overview of the IWSLT 2017 evaluation campaign. In Proc. IWSLT. Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proc. NAACLHLT. Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. In Proc. NeurIPS. Laurent Condat. 2016. Fast projection onto the simplex and the ℓ1 ball. Mathematical Programming, 158(12):575–585. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, G˙eraldine Walther, Ekaterina Vylomova, Arya D McCarthy, Katharina Kann, Sebastian Mielke, Garrett Nicolai, Miikka Silfverberg, et al. 2018. The CoNLL–SIGMORPHON 2018 shared task: Universal morphological reinflection. Proc. CoNLL– SIGMORPHON. John M Danskin. 1966. The theory of max-min, with applications. SIAM Journal on Applied Mathematics, 14(4):641–664. Sergey Edunov, Myle Ott, Michael Auli, David Grangier, et al. 2018. Classical structured prediction losses for sequence to sequence learning. In Proc. NAACLHLT. Spence Green, Sida I Wang, Jason Chuang, Jeffrey Heer, Sebastian Schuster, and Christopher D Manning. 2014. Human effort and machine learnability in computer aided translation. In Proc. EMNLP. Michael Held, Philip Wolfe, and Harlan P Crowder. 1974. Validation of subgradient optimization. Mathematical Programming, 6(1):62–88. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. 
Neural Computation, 9(8):1735–1780. Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In Proc. NAACL-HLT. Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, et al. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Katharina Kann and Hinrich Sch¨utze. 2016. MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection. In Proc. SIGMORPHON. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proc. ICLR. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. arXiv e-prints. 1513 Sachin Kumar and Yulia Tsvetkov. 2019. Von MisesFisher loss for training sequence to sequence models with continuous outputs. In Proc. ICLR. Jun Liu and Jieping Ye. 2009. Efficient Euclidean projections in linear time. In Proc. ICML. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proc. EMNLP. Peter Makarov and Simon Clematide. 2018. UZH at CoNLL–SIGMORPHON 2018 shared task on universal morphological reinflection. Proc. CoNLL– SIGMORPHON. Chaitanya Malaviya, Pedro Ferreira, and Andr´e FT Martins. 2018. Sparse and constrained attention for neural machine translation. In Proc. ACL. Andr´e FT Martins and Ram´on Fernandez Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. In Proc. of ICML. Christian Michelot. 1986. A finite algorithm for finding the projection of a point onto the canonical simplex of Rn. Journal of Optimization Theory and Applications, 50(1):195–200. Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In Proc. AISTATS. Graham Neubig. 2011. The Kyoto free translation task. http://www.phontron.com/kftt. Vlad Niculae and Mathieu Blondel. 2017. A regularized framework for sparse and structured neural attention. In Proc. NeurIPS. Vlad Niculae, Andr´e FT Martins, Mathieu Blondel, and Claire Cardie. 2018. SparseMAP: Differentiable sparse structured inference. In Proc. ICML. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. ACL. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In Proc. NeurIPS Autodiff Workshop. Ben Peters, Jon Dehdari, and Josef van Genabith. 2017. Massively multilingual neural grapheme-tophoneme conversion. In Proc. Workshop on Building Linguistically Generalizable NLP Systems. R Tyrrell Rockafellar. 1970. Convex Analysis. Princeton University Press. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Neural machine translation of rare words with subword units. In Proc. ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Edinburgh neural machine translation systems for WMT 16. In Proc. WMT. Constantino Tsallis. 1988. Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical Physics, 52:479–487. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. NeurIPS. 
Martin J Wainwright and Michael I Jordan. 2008. Graphical models, exponential families, and variational inference. Foundations and Trends® in Machine Learning, 1(1–2):1–305. Shijie Wu, Pamela Shapiro, and Ryan Cotterell. 2018. Hard non-monotonic attention for character-level transduction. In Proc. EMNLP. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proc. ICML. 1514 A Background A.1 Tsallis entropies Recall the definition of the Tsallis family of entropies in Eq. 9 for α ≥1, HT α(p) := ( 1 α(α−1) P j  pj −pα j  , if α ̸= 1, HS(p), if α = 1. (14) This family is continuous in α, i.e., lim α→1 HT α(p) = HT 1(p) for any p ∈△d. Proof: We rewrite HT α in separable form: HT α(p) = X j hα(pj), with hα(t) := ( t−tα α(α−1), α > 1, −t log t, α = 1. (15) It suffices to show that limα→1 hα(t) = h1(t) for t ∈[0, 1]. Let f(α) := t −tα, and g(α) := α(α −1). Note that f(1) g(1) = 0/0, leading to an indetermination. Take the derivatives of f and g: f′(α) = 0 −(exp(log tα))′ = −exp(log tα) · (α log t)′ = −tα log t, (16) and g′(α) = 2α −1. From l’Hˆopital’s rule, lim α→1 f(α) g(α) = lim α→1 f′(α) g′(α) = −t log t = h1(t). Note also that, as α →∞, the denominator grows unbounded, so HT ∞≡0. A.2 Fenchel-Young losses In this section, we recall the definitions and properties essential for our construction of α-entmax. The concepts below were formalized by Blondel et al. (2019) in more generality; we present below a less general version, sufficient for our needs. Definition 1 (Probabilistic prediction function regularized by Ω). Let Ω: △d →R ∪{∞} be a strictly convex regularization function. We define the prediction function πΩas πΩ(z) ∈argmax p∈△d p⊤z −Ω(p)  (17) Definition 2 (Fenchel-Young loss generated by Ω). Let Ω: △d →R ∪{∞} be a strictly convex regularization function. Let y ∈△denote a groundtruth label (for example, y = ey if there is a unique correct class y). Denote by z ∈Rd the prediction scores produced by some model, and by p⋆:= πΩ(z) the probabilistic predictions. The Fenchel-Young loss LΩ: Rd × △→R+ generated by Ωis LΩ(z; y) := Ω(y) −Ω(p⋆) + z⊤(p⋆−y). (18) This justifies our choice of entmax mapping and loss (Eqs. 10–11), as π−HTα = α-entmax and L−HTα = Lα. Properties of Fenchel-Young losses. 1. Non-negativity. LΩ(z; y) ≥0 for any z ∈ Rd and y ∈△d. 2. Zero loss. LΩ(z; y) = 0 if and only if y = πΩ(z), i.e., the prediction is exactly correct. 3. Convexity. LΩis convex in z. 4. Differentiability. LΩis differentiable with ∇LΩ(z; y) = p⋆−y. 5. Smoothness. If Ωis strongly convex, then LΩis smooth. 6. Temperature scaling. For any constant t > 0, LtΩ(z; y) = tLΩ(z/t; y). Characterizing the solution p⋆of πΩ(z). To shed light on the generic probability mapping in Eq. 17, we derive below the optimality conditions characterizing its solution. The optimality conditions are essential not only for constructing algorithms for computing p⋆(Appendix C), but also for deriving the Jacobian of the mapping (Appendix B). The Lagrangian of the maximization in Eq. 17 is L(p, ν, τ) = Ω(p) −(z + ν)⊤p + τ(1⊤p −1). (19) with subgradient ∂pL(p, ν, τ) = ∂Ω(p) −z −ν + τ1. (20) The subgradient KKT conditions are therefore:            z + ν −τ1 ∈∂Ω(p) p⊤ν = 0 p ∈△d ν ≥0. (21) (22) (23) (24) 1515 Connection to softmax and sparsemax. We may now directly see that, when Ω(p) := P j pj log pj, Eq. 
21 becomes log pj = zj + νj − τ −1, which can only be satisfied if pj > 0, thus ν = 0. Then, pj = exp(zj)/Z, where Z := exp(τ + 1). From Eq. 23, Z must be such that pj sums to 1, yielding the well-known softmax expression. In the case of sparsemax, note that for any p ∈△d, we have Ω(p) = −HG(p) = 1/2 X j pj(pj −1) = 1/2∥p∥2 −1 2 X j pj | {z } =1 = 1/2∥p∥2 + const. (25) Thus, argmax p∈△d p⊤z + HG(p) = argmin p∈△d 0.5  ∥p∥2 −2p⊤z +∥z∥2  = argmin p∈△d ∥p −z∥2. (26) B Backward pass for generalized sparse attention mappings When a mapping πΩis used inside the computation graph of a neural network, the Jacobian of the mapping has the important role of showing how to propagate error information, necessary when training with gradient methods. In this section, we derive a new, simple expression for the Jacobian of generalized sparse mappings πΩ. We apply this result to obtain a simple form for the Jacobian of α-entmax mappings. The proof is in two steps. First, we prove a lemma that shows that Jacobians are zero outside of the support of the solution. Then, to complete the result, we characterize the Jacobian on the support. Lemma 1 (Sparse attention mechanisms have sparse Jacobians). Let Ω: Rd →R be strongly convex. The attention mapping πΩis differentiable almost everywhere, with Jacobian ∂πΩ ∂z symmetric and satisfying ∂(πΩ(z))i ∂zj = 0 if πΩ(z)  i = 0 or πΩ(z)  j = 0. Proof: Since Ωis strictly convex, the argmax in Eq. 17 is unique. Using Danskin’s theorem (Danskin, 1966), we may write πΩ(z) = ∇max p∈△  p⊤z −Ω(p)  = ∇Ω∗(z). Since Ωis strongly convex, the gradient of its conjugate Ω∗is differentiable almost everywhere (Rockafellar, 1970). Moreover, ∂πΩ ∂z is the Hessian of Ω∗, therefore symmetric, proving the first two claims. Recall the definition of a partial derivative, ∂(πΩ(z))i ∂zj = lim ε→0 1 ε (πΩ(z + εej)i −πΩ(z)i) . Denote by p⋆:= πΩ(z). We will show that for any j such that p⋆ j = 0, and any ε ≥0, πΩ(z −εej) = πΩ(z) = p⋆. In other words, we consider only one side of the limit, namely subtracting a small non-negative ε. A vector p⋆solves the optimization problem in Eq. 17 if and only if there exists ν⋆∈Rd and τ ⋆∈R satisfying Eqs. 21–24. Let νε := ν⋆+ εej. We verify that (p⋆, νε, τ ⋆) satisfies the optimality conditions for πΩ(z −εej), which implies that πΩ(z −εej) = πΩ(z). Since we add a nonnegative quantity to ν⋆, which is non-negative to begin with, (νε)j ≥0, and since p⋆ j = 0, we also satisfy p⋆ j(νε)j = 0. Finally, z −εej + νε −τ ⋆1 =z −εej + ν⋆+ εej −τ ⋆1 ∈∂Ω(p⋆). (27) It follows that lim ε→0− 1 ε (πΩ(z + εej)i −πΩ(z)i) = 0. (28) If πΩis differentiable at z, this one-sided limit must agree with the derivative. Otherwise, the sparse one-sided limit is a generalized Jacobian. Proposition 2. Let p⋆:= πΩ(z), with strongly convex and differentiable Ω. Denote the support of p⋆by S =  j ∈{1, . . . , d} : pj > 0 . If the second derivative hij = ∂2Ω ∂pi∂pj (p⋆) exists for any i, j ∈S, then ∂πΩ ∂z = S − 1 ∥s∥1 ss⊤ (29) 1516 where Sij = ( H−1 ij , i, j ∈S, 0, o.w. , and s = S1. (30) In particular, if Ω(p) = P j g(pj) with g twice differentiable on (0, 1], we have ∂πΩ ∂z = diag s − 1 ∥s∥1 ss⊤ (31) where si = (g′′(p⋆ i ) −1, i ∈S, 0, o.w. (32) Proof: Lemma 1 verifies that ∂(πΩ)i ∂zj = 0 for i, j /∈S. It remains to find the derivatives w.r.t. i, j ∈S. Denote by ¯p⋆, ¯z the restriction of the corresponding vectors to the indices in the support S. The optimality conditions on the support are  g(¯p) + τ1 = ¯z 1⊤¯p = 1 (33) where g(¯p) := ∇Ω(p)  S, so ∂g ∂¯p(¯p⋆) = H. Differentiating w.r.t. 
¯z at p⋆yields  H ∂¯p ∂¯z + 1 ∂τ ∂¯z = I 1⊤∂¯p ∂¯z = 0 (34) Since Ωis strictly convex, H is invertible. From Gaussian elimination (i.e., the Schur complement), ∂τ ∂¯z = − 1 1⊤H−111⊤H−1, which can then be used to solve for ∂¯p ∂¯z giving ∂¯p ∂¯z = H−1 − 1 1⊤H−11H−111⊤H−1, yielding the desired result. When Ωis separable, H is diagonal, with Hii = g′′(p⋆ i ), yielding the simplified expression which completes the proof. Connection to other differentiable attention results. Our result is similar, but simpler than Niculae and Blondel (2017, Proposition 1), especially in the case of separable Ω. Crucially, our result does not require that the second derivative exist outside of the support. As such, unlike the cited work, our result is applicable in the case of α-entmax, where either g′′(t) = tα−2 or its reciprocal may not exist at t = 0. C Algorithms for entmax C.1 General thresholded form for bisection algorithms. The following lemma provides a simplified form for the solution of α-entmax. Lemma 2. For any z ∈Rd, there is a unique τ ⋆∈R such that α-entmax(z) = [(α −1)z −τ ⋆1] 1/α−1 + . (35) Proof: We use the regularized prediction functions defined in Appendix A.2. From both definitions, α-entmax(z) ≡π−HTα(z). We first note that for all p ∈△d, −(α −1)HT α(p) = 1 α d X i=1 pα i + const. (36) From the constant invariance and scaling of πΩ (Blondel et al., 2019, Proposition 1, items 4–5), π−HTα(z) = πΩ((α −1)z), (37) with Ω(p) = d X j=1 g(pj), g(t) = tα α . (38) Using (Blondel et al., 2019, Proposition 5), noting that g′(t) = tα−1 and (g′)−1(u) = u 1/α−1, yields πΩ(z) = [z −τ ⋆1] 1/α−1 + , (39) and therefore α-entmax(z) = [(α −1)z −τ ⋆1] 1/α−1 + . (40) Uniqueness of τ ⋆follows from the fact that α-entmax has a unique solution p⋆, and Eq. 40 implies a one-to-one mapping between p⋆and τ ⋆, as long as p⋆∈△. Corollary 2.1. For α = 1.5, Lemma 2 implies existence of a unique τ ⋆such that 1.5-entmax(z) = [z/2 −τ ⋆1]2 +. 1517 C.2 An exact algorithm for entmax with α = 1.5: Derivation of Algorithm 2. In this section, we derive an exact, sorting-based algorithm for 1.5-entmax. The key observation is that the solution can be characterized by the size of its support, ρ⋆= ∥p⋆∥0. Then, we can simply enumerate all possible values of ρ ∈{1, . . . , d} until the solution verifies all optimality conditions. The challenge, however, is expressing the threshold τ as a function of the support size ρ; for this, we rely on α = 1.5. Proposition 3. Exact computation of 1.5-entmax(z) Let z[d] ≤· · · ≤z[1] denote the sorted coordinates of z, and, for convenience, let z[d+1] := −∞. Define the top-ρ mean, unnormalized variance, and induced threshold for ρ ∈{1, . . . , d} as Mz(ρ) := 1 ρ ρ X j=1 z[j], Sz(ρ) := ρ X j=1 z[j] −Mz(ρ) 2 , τz(ρ) := ( Mz(ρ) − q 1−Sz(ρ) ρ , Sz(ρ) ≤1, +∞, Sz(ρ) > 1. Then, (1.5-entmax(z))i = hzi 2 −τz/2(ρ) i2 + , (41) for any ρ satisfying τz(ρ) ∈[z[ρ+1], z[ρ]]. Proposition 3 implies the correctness of Algorithm 2. To prove it, we first show the following. Lemma 3. Define τ(ρ) as in Proposition 3. Then, τ is non-decreasing, and there exists ρmax ∈ {1, . . . , d} such that τ is finite for 1 ≤ρ ≤ρmax, and infinite for ρ > ρmax. The proof is slightly more technical, and we defer it to after the proof of the proposition. Proof of Proposition 3. First, using Corollary 2.1 we reduce the problem of computing 1.5-entmax to πΩ(z) := argmax p∈△d p⊤z − X j 2/3 p 3/2 j . (42) Denote by τ ⋆the optimal threshold as defined in the corollary. 
We will show that τ ⋆= τ(ρ) for any ρ satisfying τ(ρ) ∈[z[ρ+1], z[ρ]], where we assume, for convenience, z[d+1] = −∞. The generic stationarity condition in Eq. 21, applied to the problem in Eq. 42, takes the form √pj = νj + zj −τ ∀0 < j ≤d (43) Since Ωis symmetric, πΩ is permutationpreserving (Blondel et al., 2019, Proposition 1, item 1), so we may assume w.l.o.g. that z is sorted non-increasingly, i.e., z1 ≥· · · ≥zd; in other words, zj = z[j]. Therefore, the optimal p is also non-increasing. Denote by ρ an index such as pj ≥0 for 1 ≤j ≤ρ, and pj = 0 for j > ρ. From the complementary slackness condition (22), νj = 0 for 1 ≤j ≤ρ, thus we may split the stationarity conditions (43) into (√pj = zj −τ, ∀1 ≤j ≤ρ, νj = τ −zj, ∀ρ < j ≤d. (44) (45) For (44) to have solutions, the r.h.s. must be nonnegative, i.e., τ ≤zj for j ≤ρ, so τ ≤zρ. At the same time, from dual feasability (24) we have νj = τ −zj ≥0 for j > ρ, therefore τ(ρ) ∈[zρ+1, zρ]. (46) Given ρ, we can solve for τ using (44) and primal feasability (23) 1 = d X j=1 pj = ρ X j=1 (zj −τ)2. (47) Expanding the squares and dividing by 2ρ yields the quadratic equation 1 2τ 2 − Pρ j=1 zj ρ τ + Pρ j=1 z2 j −1 2ρ = 0, (48) with discriminant ∆(ρ) = M(ρ) 2 − Pρ j=1 z2 j ρ + 1 ρ = 1 −S(ρ) ρ . (49) where we used the variance expression E h (X −E[X])2i = E[X2] −E[X]2. If S(ρ) > 1, ∆(ρ) < 0, so there must exist an optimal ρ satisfying S(ρ) ∈[0, 1]. Therefore, (48) has the two solutions τ±(ρ) = M(ρ) ± q 1−S(ρ) ρ . However, τ+ leads to a contradiction: The mean M(ρ) is never smaller than the smallest averaged term, so M(ρ) ≥zρ, and thus τ+ ≥zρ. At the same time, from (46), τ ≤zρ, so τ must equal zρ, which can only happen if M(ρ) = zρ and 1518 S(ρ) = 1. But M(ρ) = zρ only if z1 = · · · = zρ, in which case S(ρ) = 0 (contradiction). Therefore, τ ⋆= τ(ρ) = M(ρ) − q 1−S(ρ) ρ for some ρ verifying (46). It remains to show that any such ρ leads to the same value of τ(ρ). Pick any ρ1 < ρ2, both verifying (46). Therefore, ρ1 + 1 ≤ ρ2 and zρ1+1 ≤ |{z} (46) for ρ1 τ(ρ1) ≤ |{z} Lemma 3 τ(ρ2) ≤ |{z} (46) for ρ2 zρ2 ≤ |{z} z sorted zρ1+1, (50) thus τ(ρ1) = τ(ρ2), and so any ρ verifying (46) satisfies τ ⋆= τ(ρ), concluding the proof. Proof of Lemma 3. We regard τ(ρ) as an extended-value sequence, i.e., a function from N →R ∪∞. The lemma makes a claim about the domain of the sequence τ, and a claim about its monotonicity. We prove the two in turn. Domain of τ. The threshold τ(ρ) is only finite for ρ ∈T :=  ρ ∈{1, . . . , d}: S(ρ) ≤1 , i.e., where (1−S(ρ))/ρ ≥0. We show there exists ρmax such that T = {1, . . . , ρmax}. Choose ρmax as the largest index satisfying S(ρmax) ≤1. By definition, ρ > ρmax implies ρ /∈T. Remark that S(1) = 0, and S(ρ+1)−S(ρ) = (·)2 ≥0. Therefore, S is nondecreasing and, for any 1 ≤ρ ≤ ρmax, 0 ≤S(ρ) ≤1. Monotonicity of τ. Fix ρ ∈[ρmax −1], assume w.l.o.g. that Mz(ρ) = 0, and define ˜z as ˜z[j] = ( x, j = ρ + 1, z[j], otherwise. The ρ highest entries of ˜z are the same as in z, so M˜z(ρ) = Mz(ρ) = 0, S˜z(ρ) = Sz(ρ), and τ˜z(ρ) = τz(ρ). Denote eτ(x) := τ˜z(ρ + 1), and analogously f M(x) and eS(x). Then, τz(ρ+1) = eτ(z[ρ+1]) ≥ min x: eS(x)∈[0,1] eτ(x) =: eτ(x⋆) (51) We seek the lower bound eτ(x⋆) and show that eτ(x⋆) ≥τz(ρ). From (51), this implies τz(ρ + 1) ≥τz(ρ) and, by transitivity, the monotonicity of τz. It is easy to verify that the following incremental update expressions hold. f M(x) = x ρ + 1, eS(x) = Sz(ρ) + ρ ρ + 1x2. (52) We must solve the optimization problem minimizex eτ(x) subject to eS(x) ∈[0, 1]. 
(53) The objective value is eτ(x) = f M(x) − s 1 −eS(x) ρ + 1 = 1 ρ + 1  x − q1 −Sz(ρ)  (ρ + 1) −ρx2  (54) Ignoring the constraint for a moment and setting the gradient to 0 yields the solution 0 = eτ ′(x⋆) = 1 ρ + 1  1 + ρx⋆ q1 −Sz(ρ)  (ρ + 1) −ρx⋆2   ⇐⇒ρx⋆= − q1 −Sz(ρ)  (ρ + 1) −ρx⋆2, (55) implying x⋆< 0. Squaring both sides and rearranging yields the solution of the unconstrained optimization problem, x⋆= − s 1 −Sz(ρ) ρ . (56) We verify that x⋆readily satisfies the constraints, thus it is a solution to the minimization in Eq. 53. Since eS(x⋆) = Sz(ρ) + 1 −Sz(ρ) ρ + 1 , (57) we have eS(x⋆) ≥Sz(ρ) ≥0 (58) and eS(x⋆) ≤Sz(ρ) + 1 −Sz(ρ) 2 ≤1. (59) 1519 Plugging x⋆into the objective yields eτ(x⋆) = 1 ρ + 1 − s 1 −Sz(ρ) ρ − q ρ 1 −Sz(ρ)  ! = − s 1 −Sz(ρ) ρ 1 ρ + 1(1 + ρ) = − s 1 −Sz(ρ) ρ = τz(ρ). (60) Therefore, eτ(x) ≥τz(ρ) for any valid x, proving that τz(ρ) ≤τz(ρ + 1).
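Appendix C's characterization is easy to sanity-check numerically: for random scores, the threshold sequence τ(ρ) is nondecreasing wherever it is finite (Lemma 3), and any ρ satisfying the bracketing condition of Proposition 3 yields a valid probability vector. The helper below is ours, written directly from the definitions above, and is intended only as a verification aid.

```python
import numpy as np

def tau_sequence(z):
    """tau(rho) from Proposition 3, for scores z already divided by 2."""
    zs = np.sort(z)[::-1]
    taus = []
    for rho in range(1, zs.size + 1):
        m = zs[:rho].mean()
        s = ((zs[:rho] - m) ** 2).sum()
        taus.append(np.inf if s > 1.0 else m - np.sqrt((1.0 - s) / rho))
    return zs, np.array(taus)

rng = np.random.default_rng(0)
z = rng.normal(size=10) / 2.0                  # the z/2 rescaling from Corollary 2.1
zs, taus = tau_sequence(z)

finite = np.isfinite(taus)
print(np.all(np.diff(taus[finite]) >= -1e-12))          # Lemma 3: nondecreasing where finite

# Find a rho with z[rho+1] <= tau(rho) <= z[rho] and check the resulting solution:
for rho in range(1, zs.size + 1):
    lower = zs[rho] if rho < zs.size else -np.inf
    if lower <= taus[rho - 1] <= zs[rho - 1]:
        p = np.clip(z - taus[rho - 1], 0.0, None) ** 2
        print(rho, np.isclose(p.sum(), 1.0))            # a valid probability vector
        break
```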
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1520–1529 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1520 On the Robustness of Self-Attentive Models Yu-Lun Hsieh1,2, Minhao Cheng3, Da-Cheng Juan4, Wei Wei4, Wen-Lian Hsu1,5, Cho-Jui Hsieh3,4 1SNHCC, TIGP, Academia Sinica, Taiwan 2National Chengchi University, Taiwan 3University of California, Los Angeles, USA 4Google Research, USA 5PAIR Labs, Ministry of Science and Technology, Taiwan [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract This work examines the robustness of selfattentive neural networks against adversarial input perturbations. Specifically, we investigate the attention and feature extraction mechanisms of state-of-the-art recurrent neural networks and self-attentive architectures for sentiment analysis, entailment and machine translation under adversarial attacks. We also propose a novel attack algorithm for generating more natural adversarial examples that could mislead neural models but not humans. Experimental results show that, compared to recurrent neural models, self-attentive models are more robust against adversarial perturbation. In addition, we provide theoretical explanations for their superior robustness to support our claims. 1 Introduction Self-attentive neural models have recently become a prominent component that achieves state-of-theart performances on many natural language processing (NLP) tasks such as text classification and machine translation (MT). This type of models, including Transformer (Vaswani et al., 2017) and “Bidirectional Encoder Representations from Transformers,” shortened as BERT (Devlin et al., 2019), rely on the attention mechanism (Luong et al., 2015) to learn a context-dependent representation; compared to recurrent neural networks (RNN), these self-attention-based models have faster encoding speed and the capacity of modeling a wider context. Particularly, BERT is recently proposed to extend the directionality of the Transformer model, and “pre-trained” using multiple objectives to strengthen its encoding capability. Then, this pre-trained model can be fine-tuned for various downstream tasks. BERT achieves state-of-the-art performance on several NLP tasks including classification and sequence-to-sequence problems, often outperforming task-specific feature engineering or model architecture; therefore, BERT is poised to be a key component in almost every neural model for NLP tasks. Despite the superior performance, it remains unclear whether the self-attentive structure deployed by Transformer or BERT is robust to adversarial attacks compared with other neural networks. Adversarial attack refers to applying a small perturbation on the model input to craft an adversarial example, ideally imperceptible by humans, and cause the model to make an incorrect prediction (Goodfellow et al., 2015). Unlike computer vision models, generating an effective, textual adversarial example that misleads a model but can go unnoticed by humans is a challenging and thriving research problem (Alzantot et al., 2018). Therefore, the goal of this paper is to answer the following questions: “Are self-attentive models more robust to adversarial examples compared with recurrent models? 
If so, why?” “Do attention scores expose vulnerability in these selfattentive models?” This work verifies the robustness of selfattentive models through performing adversarial attacks and analyzing their effects on the model prediction. In addition, we investigate the feasibility of utilizing the context-dependent embeddings in these models to maximize semantic similarity between real and adversarial sentences. We conduct experiments on two mainstream self-attentive models: (a) Transformer for neural machine translation, and (b) BERT for sentiment and entailment classification. To the best of our knowledge, this paper brings the following contributions. • We propose novel algorithms to generate more natural adversarial examples that both preserve the semantics and mislead the classifiers. • We conduct comprehensive experiments to 1521 examine the robustness of RNN, Transformer, and BERT. Our results show that both self-attentive models, whether pre-trained or not, are more robust than recurrent models. • We provide theoretical explanations to support the statement that self-attentive structures are more robust to small adversarial perturbations. 2 Target Neural Models This section describes the target neural architectures, LSTM and self-attentive models, and how to adapt these models for the downstream tasks: sentiment analysis, entailment and translation. 2.1 LSTM For classification tasks including sentiment analysis and entailment detection, we use a Bidirectional LSTM with an attention (Hochreiter and Schmidhuber, 1997; Bahdanau et al., 2014) layer as the sentence encoder, and a fully connected layer for classification problems. For machine translation, we employ a common seq2seq model (Sutskever et al., 2014), in which both the encoder and decoder are a 2-layer stacked BiLSTM with 512 hidden units. 2.2 Self-Attentive Models Self-attentive models are further distinguished into BERT and Transformers. The classification problems adopt the BERT model with an identical setup to the original paper (Devlin et al., 2019), in which BERT is used as an encoder that represents a sentence as a vector. This vector is then used by a fully connected neural network for classification. Note that models are tuned separately for each task. We also experiment with a smaller BERT model without pre-training, denoted as BERTNOPT, in order to isolate the impact of pre-training. Due to the limited size of the training data, we only incorporate three layers of self-attention in the smaller model. To the best of our knowledge, there is no prior work that uses pre-trained BERT for machine translation. Thus, the Transformer model is employed for neural machine translation task. 3 Attack Methods In this section, we provide five methods to generate adversarial examples (or called “attacks”). The goal of an attack is to find and replace one word in the original input sentence, turning the output label (or sequence) from the model to be incorrect. The first method is based on random word replacement, which serves as the baseline. The second (list-based) and third (greedy) methods are adapted from prior arts. The fourth (constrained greedy) and fifth (attention-based) are proposed by us. We also describe the evaluation metrics. 3.1 Random Attack This attack randomly replaces one word in the input sentence with another word from the vocabulary. We repeat this process by 105 times and calculate the average as the final performance. This baseline is denoted as RANDOM. 
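A hedged sketch of this baseline follows, assuming a black-box `predict` interface from a token list to a predicted label (our notation, not the paper's code); it treats the 10^5 repetitions as per-sentence replacement trials whose flip rate is averaged, which is one reading of the averaging described above.

```python
import random

def random_attack(tokens, vocab, predict, true_label, n_trials=10**5):
    """RANDOM baseline: replace one word at random and average the flip rate."""
    successes = 0
    for _ in range(n_trials):
        i = random.randrange(len(tokens))
        candidate = tokens[:i] + [random.choice(vocab)] + tokens[i + 1:]
        if predict(candidate) != true_label:
            successes += 1
    return successes / n_trials

# Hypothetical toy usage with a stand-in classifier:
vocab = ["good", "bad", "cello", "visit", "truly", "here"]
predict = lambda toks: "positive" if "good" in toks else "negative"
rate = random_attack(["this", "is", "good"], vocab, predict, "positive", n_trials=1000)
print(rate)
```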
3.2 List-based Attack The second method is recently proposed by Alzantot et al. (2018), denoted as LIST. LIST employs a list of semantically similar words (i.e., synonyms), and manages to replace a word in the input sentence with another from the list to construct adversarial examples. In other words, the list is used to replace a word with one of its synonyms; this process is repeated for every word in the input sentence until the target model makes an incorrect prediction. That is, for every sentence, we start by replacing the first word with its synonyms, each forming a new adversarial example. If none of these successfully misleads the model, we move to the next word (and the first word remains unchanged), and repeat this process until either the attack succeeds or all words have been tried. 3.3 Greedy Select + Greedy Replace The third method (denoted as GS-GR) greedily searches for the weak spot of the input sentence (Yang et al., 2018) by replacing each word, one at a time, with a “padding” (a zero-valued vector) and examining the changes of output probability. After determining the weak spot, GS-GR then replaces that word with a randomly selected word in the vocabulary to form an attack. This process is repeated until the attack succeeds or all words in the vocabulary are exhausted. 3.4 Greedy Select + Embedding Constraint Although the GS-GR method potentially achieves a high success rate, the adversarial examples formed by GS-GR are usually unnatural; sometimes GS-GR completely changes the semantics of 1522 the original sentence by replacing the most important word with its antonym, for example: changing “this is a good restaurant” into “this is a bad restaurant.” This cannot be treated as a successful attack, since humans will notice the change and agree with the model’s output. This is because GS-GR only considers the classification loss when finding the replacement word, and largely ignore the actual semantics of the input sentence. To resolve this issue, we propose to add a constraint on sentence-level (not word-level) embedding: the attack must find a word with the minimum L1 distance between two embeddings (from the sentences before and after the word change) as the replacement. This distance constraint requires a replacement word not to alter the sentence-level semantics too much. This method is denoted as GS-EC. In the experimental results, we show that the GS-EC method achieves a similar success rate as GS-GR in misleading the model, while being able to generate more natural and semanticallyconsistent adversarial sentences. 3.5 Attention-based Select We conjecture that self-attentive models rely heavily on attention scores, and changing the word with the highest or lowest attention score could substantially undermine the model’s prediction. Therefore, this attack method exploits and also investigates the attention scores as a potential source of vulnerability. This method first obtains the attention scores and then identifies a target word that has the highest or lowest score. Target word is then replaced by a random word in the vocabulary, and this process is repeated until the model is misled by the generated adversarial example. These methods are denoted as ASMIN-GR that replaces the word with the lowest score, and ASMAX-GR with the highest score. Furthermore, the constraint on the embedding distance can also be imposed here for finding semantically similar adversarial examples; these methods are referred as ASMIN-EC and ASMAXEC, respectively. 
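The selection and replacement strategies of §3.3–3.5 reduce to a small amount of bookkeeping around the target model. The sketch below is a hedged outline: `prob_of_label`, `predict`, `sentence_embedding`, the padding token, and the per-token reduction of BERT's first attention layer are all assumed interfaces rather than the paper's actual code, and the "-EC" variant is implemented under one reading of the constraint (minimum L1 distance among replacements that flip the prediction).

```python
import numpy as np

def greedy_select(tokens, label, prob_of_label, pad="<pad>"):
    # GS step (Sec. 3.3-3.4): the weak spot is the position whose replacement
    # by a padding token reduces the probability of the current label the most.
    base = prob_of_label(tokens, label)
    drops = [base - prob_of_label(tokens[:i] + [pad] + tokens[i + 1:], label)
             for i in range(len(tokens))]
    return int(np.argmax(drops))

def attention_select(first_layer_attn, use_max=True):
    # AS step (Sec. 3.5): average the first attention layer over its heads
    # (and, here, over query positions -- one reasonable per-token reduction)
    # and pick the token with the highest (ASMAX) or lowest (ASMIN) score.
    scores = first_layer_attn.mean(axis=0).mean(axis=0)
    return int(scores.argmax() if use_max else scores.argmin())

def pick_replacement(tokens, i, vocab, predict, label, sentence_embedding=None):
    # "-GR": return any replacement word that flips the model's prediction.
    # "-EC": among flipping replacements, return the one whose sentence
    # embedding has minimum L1 distance to the original sentence's embedding.
    orig_emb = sentence_embedding(tokens) if sentence_embedding else None
    best, best_dist = None, np.inf
    for w in vocab:
        cand = tokens[:i] + [w] + tokens[i + 1:]
        if predict(cand) == label:
            continue                                  # prediction unchanged, not an attack
        if sentence_embedding is None:
            return cand
        dist = np.abs(sentence_embedding(cand) - orig_emb).sum()
        if dist < best_dist:
            best, best_dist = cand, dist
    return best
```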
As a pilot study, we examine the attention scores on the first and last layers of the BERT model for understanding the model’s behavior under attacks. 3.6 Evaluation Criteria We evaluate the robustness of the classification models (for sentiment analysis and entailment) by the following three criteria: (a) the success rate of the attacks misleading the model, (b) readability, and (c) human accuracy. Both readability and human accuracy are evaluated qualitatively by human raters. Readability measures the relative naturalness of the adversarial examples generated by different attack methods. For example, if 100 raters determine that the adversary generated by method A is more readable than method B, and 40 raters think otherwise, the relative readability scores of methods A and B will be 1 and 0.4, respectively. And human accuracy is the percentage that human judgment of these examples remains identical to the ground-truth label. In order to evaluate the models and at the same time keep reasonable execution time, we randomly select 100 samples from the test set that all models answer correctly to perform attacks. For the experiments on machine translation task, we evaluate the attack success rate and BLEU scores (Papineni et al., 2002) for 200 sentence pairs in the WMT 17 Task (Bojar et al., 2017). 4 Experiment I: Sentiment Analysis We first evaluate the robustness of LSTM, BERT, and BERTNOPT on binary sentiment analysis using the Yelp dataset (Zhang et al., 2015). Models under attack have accuracies of 93.7%, 87.3% and 90.7% for fine-tuned BERT model, BERTNOPT and LSTM, respectively, on the test set. Note that for attention-based attacks (i.e., ASMIN-GR, ASMAXGR, ASMIN-EC, and ASMAX-EC), the average of the first (i.e., the one that is closest to the model input) attention layer from all 12 heads in BERT and BERTNOPT are used for our attacks.1 4.1 Results To illustrate how adversarial attacks work, Fig. 1 shows the results from ASMAX-EC and ASMIN-EC methods that select a word to change based on the attention scores of the original sentence. A comprehensive quantitative comparison can be found in Table 1, from which we make the following observations: • Greedy-based attacks consistently achieve higher successful rate than other attacks. The proposed GS-EC method can achieve almost identical success rates with GS-GR while restricting the search space based on the embedding distances. We will further show that 1As an alternative, we tested using the last layer during ASMAX-ECattack. However, experimental results exhibit a < 10% success rate. 1523 Figure 1: Illustrations of attention scores of (a) the original input, (b) ASMIN-EC, and (c) ASMAX-EC attacks. The attention-based methods select words based on the maximum or minimum attention, which is annotated by red boxes. Both of them reversed the predicted sentiment of the sentence from positive to negative. Model Attack Method LSTM BERT BERTNOPT RANDOM 1.1% 0.8% 1% LIST 27% 6% 15% ASMIN-GR 16% 11% 32% ASMAX-GR 62% 17% 35% ASMIN-EC 16% 10% 32% ASMAX-EC 62% 17% 35% Best attention attack(A∗) 62% 17% 35% GS-GR 79% 52% 53% GS-EC 78% 50% 53% Table 1: Success rates of attack methods across models for sentiment analysis. Bold numbers indicate the highest attack rate in a column. GS-EC leads to higher quality adversarial examples in Section 4.2. • We found that using attention, especially ASMAX methods, can easily break the LSTM model. However, the same vulnerability does not exist in BERT or BERTNOPT models. 
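The two human-evaluation metrics defined in §3.6 above are straightforward to compute; a minimal sketch matching the worked example given there (function names are ours):

```python
def readability_scores(votes):
    """Relative readability: each method's preference count, normalized by the maximum."""
    top = max(votes.values())
    return {method: count / top for method, count in votes.items()}

def human_accuracy(human_labels, gold_labels):
    """Fraction of human judgments that match the ground-truth labels."""
    return sum(h == g for h, g in zip(human_labels, gold_labels)) / len(gold_labels)

print(readability_scores({"A": 100, "B": 40}))                        # {'A': 1.0, 'B': 0.4}
print(human_accuracy(["pos", "neg", "pos"], ["pos", "pos", "pos"]))   # 0.667
```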
Since different types of attention-based attacks are suitable for different models, we summarize the best attention-based attack performance as A∗in the table, which takes the maximum over four different types of attention-based attacks. • Self-attentive models (BERT and BERTNOPT) consistently lead to lower attack successful rates compared with the LSTM model, under RANDOM, LIST, attention-based attacks and greedy-based attacks. We demonstrate the robustness of BERT model under GS-EC attack in Fig 2. We can see that, GS-EC caused a substantial shift in the LSTM’s attention map while that of BERT remain stable. I truly enjoyed my visit here 0.061 0.093 0.528 0.130 0.090 0.055 Original, Pred: P I truly enjoyed cello visit here 0.058 0.064 0.309 0.059 0.267 0.125 Adversarial, Pred: N (a) LSTM I truly enjoyed my visit here 0.066 0.228 0.126 0.067 0.118 0.109 Original, Pred: P I truly jammed my visit here 0.061 0.238 0.133 0.067 0.118 0.109 Adversarial, Pred: N (b) BERT Figure 2: Attention scores in (a) LSTM and (b) BERT models under GS-EC attacks. Although GS-EC successfully flips the predicted sentiment for both models from positive to negative, the attention scores remain stable for BERT model. The LSTM model, however, suffers from a large shift in attention distribution. Method Sentence GS-GR Pizzeria Bianco was a such never a nice treat that was [...] GS-EC Pizzeria Bianco was a such ostensibly a nice treat that was [...] GS-GR The desserts here are absolutely great 0 ! [...] GS-EC The desserts here are absolutely great soluble ! [...] Table 2: Adversarial examples for the BERT sentiment analysis model generated by GS-GR and GS-EC methods. Both attacks caused the prediction of the model to change. Note here that GS-EC model selects a word that preserves local coherency due to the similarity constraints. GS-GR model, on the contrary, finds a word that is less coherent with the context. 4.2 Quality of Adversarial Examples We conduct experiments to assess the naturalness of adversarial examples. First, Table 2 compares the quality of the results generated by GS-GR and GS-EC attacks on a BERT model. Here we see that constraints imposed by GS-EC make it superior than GS-GR in terms of retrieving words that are coherent with the context. Furthermore, we organize a large-scale human evaluation on Amazon Mechanical Turk regarding the qualities of adversarial examples generated by different methods. Each sample is voted by 3 turkers. Recall that we define “Readability” and “Human accuracy” in Section 3.6. Readability is regarded as the relative naturalness of the adver1524 sarial examples, normalized to the maximum between the compared methods. The human accuracy metric is the percentage of human responses that matches the true label. Table 3 is a comparison of LSTM and BERT models using the GS-EC attack. It shows that the distance in embeddings space of BERT can better reflect semantic similarity and contribute to more natural adversarial examples. And, in Table 4, we compare using GSGR and GS-EC method on BERT model. Again, we see that the GS-EC method, which restricts the distance between sentence embeddings of original and adversarial inputs, can produce superior adversaries. Model Readability Human Accuracy LSTM 0.6 52.1% BERT 1.0 68.8% Table 3: Comparison of LSTM and BERT models under human evaluations against GS-EC attack. Readability is a relative quality score between models, and Human Accuracy is the percentage that human raters correctly identify the adversarial examples. 
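For reference, the two human-evaluation quantities reported in Table 3 above and Table 4 below can be computed from raw rater votes as in the short sketch that follows; the function and variable names are our own and simply mirror the definitions given in Section 3.6.

```python
def relative_readability(votes_a, votes_b):
    """Pairwise readability votes -> relative scores, e.g. (100, 40) -> (1.0, 0.4)."""
    top = max(votes_a, votes_b)
    return votes_a / top, votes_b / top

def human_accuracy(rater_labels, gold_labels):
    """Fraction of human judgments on adversarial examples that match the ground truth."""
    hits = sum(r == g for r, g in zip(rater_labels, gold_labels))
    return hits / len(gold_labels)
```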
Method Readability Human Accuracy GS-GR 0.55 64.6% GS-EC 1.0 68.8% Table 4: Comparison of GS-GR and GS-EC attacks on BERT model for sentiment analysis. Readability is a relative quality score between attack methods, and Human Accuracy is the percentage that human raters correctly identify the sentiment of adversarial examples. 5 Experiment II: Textual Entailment We conduct evaluations on MultiNLI (Williams et al., 2018) dataset for textual entailment with approaches similar to the ones in the last section. MultiNLI is one of the many datasets that see major improvements by BERT. The BERT model is trained to achieve 83.5% accuracy and LSTM 76%. BERTNOPT is excluded from this experiment since it cannot reach a satisfactory accuracy. 5.1 Results Results from entailment models fall into the same pattern as those from sentiment analysis, which is listed in Table 5. Our findings are summarized as follows: • The entailment task is more difficult than single-sentence classification, as evidenced by the higher success rates of attacks among all models and attacks. • The greedy-based attacks consistently achieve higher success rates. • ASMAX methods continue to be superior than ASMIN, although the difference here is not as drastic as in the previous experiment. • BERT model remains more robust compared with LSTM. Model Attack Method LSTM BERT RANDOM 17.8% 9.2% LIST 63% 56% ASMIN-GR 57% 53% ASMAX-GR 78% 54% ASMIN-EC 55% 52% ASMAX-EC 78% 51% Best attention attack(A∗) 78% 54% GS-GR 95% 75% GS-EC 95% 75% Table 5: Success rate of different attack methods on LSTM and BERT for the MultiNLI development set. 5.2 Quality of Adversarial Examples Samples illustrated in Table 6 show that the GSEC method can find more coherent words for the attack, as opposed to GS-GR. For instance, changing the word “great” to “vast” can cause the model to misjudge the entailment relation in the second example. Unfortunately due to budget constraints, we did not conduct large scale human experiments on this dataset. 6 Experiment III: Machine Translation We implement LSTM and Transformer machine translation models using OpenNMT-py2. Specifically, for the LSTM model, we train it with 453 thousand pairs from the Europarl corpus of German-English WMT 15 Task3, common crawl, and news-commentary. The LSTM model is a two-layer bidirectional LSTM with 512 hidden units together with a attention layer. We use the default hyper-parameters, and reproduce the performance reported by Ha et al. (2016). For the Transformer, we use a public pre-trained model with 6 self-attention layers provided by OpenNMT-py that reproduces the performance reported by Vaswani et al. (2017). 2https://github.com/OpenNMT/OpenNMT-py 3http://www.statmt.org/wmt15/translation-task.html 1525 Label Sentence 1 Sentence 2 Contradiction No, I don’t know. (Original)Yes , I know. →Neutral (GS-GR) Yes, I 0. (GS-EC) Yes, I renovated. Neutral →Contradiction That’s it. The girl looked at him, then passed her hand across her forehead. (Original)The girl looked at him with great interest. (GS-GR) The girl looked at him with ! interest. (GS-EC) The girl looked at him with vast interest. Entailment →Neutral (Original)Workers are also represented in civil rights and retaliation claims. Some workers are represented in civil rights and retaliation claims. (GS-GR) Workers are also represented in civil rights and ? claims. (GS-EC) Workers are also represented in civil rights and targets claims. 
Table 6: Adversarial examples generated by GS and GS-EC attacks for BERT entailment classifier. Unlike the classification tasks, in machine translation the attack goal is harder to define. We chose to evaluate the robustness under two types of attacks. In the first type of “targeted keyword attack” discussed in (Cheng et al., 2018), we attempt to generate an adversarial input sequence such that a specific keyword appears in the output sequence within the threshold ∆of number of word changes we allowed. Empirically, we set ∆= 3 in these experiments and adopt the most successful attack, GS-EC, to this case. For the second type of untargeted attack, we consider perturbing the input to degrade the BLUE score of output sequences with respect to the ground-truths. For doing this, we conduct a typo-based attack (Belinkov and Bisk, 2018). Specifically, we randomly select one word in each sentence and change it to a typo predefined in a common typo list. This can be viewed as an extension of LIST attack to the translation task. 6.1 Results For the targeted keyword attack, the success rates on both models are reported in Table 7. First, we notice that the success rate of the attacks are below 30%, presumably because translation is substantially more complex compared with the aforementioned text classification tasks. Nevertheless, the attacks on the Transformer model is significantly less successful than the LSTM-based one. For the typo-based attack, the BLUE scores before/after the attack are reported in Table 8. We observe that the Transformer-based model always achieves a higher BLEU score over LSTM-based model, i.e., have a better translation performance whether the sentences contain typos or not. We conclude that Transformer-based model exhibits a greater robustness over LSTM-based model in (a) LSTM (b) Transformer Figure 3: Compare attention scores of the original versus adversarial inputs for LSTM and Transformer models for machine translation. the case of machine translation. This is consistent with our findings in the previous experiments on sentiment and entailment classification problems. In addition, we present some successful adversarial examples in Table 9, and see that the greedy attack can indeed generate natural examples for both models. Attack Method LSTM Transformer GS-EC 27.5% 10.5% Table 7: Targeted attack success rate with GS-EC in translation tasks. Model Original Adversarial LSTM 25.10 13.44 Transformer 34.90 26.02 Table 8: BLEU scores using typo-based attack on LSTM and Transformer translation models. 7 Theoretical Analysis All the above experiments conclude that a selfattentive model exhibits higher robustness compared to a recurrent one. This is somewhat counter-intuitive—at the first glance one may assume that the self-attention layer is not robust 1526 Original input There is a fundamental philosophical reason for the differences between Donald Trump’s and Hillary Clinton’s [...] LSTM Adv input There is a fundamental philosophical r for the differences between Donald Trump’s and Hillary Clinton’s [...] 
Original output Es gibt einen grundlegenden philosophischen Grund für die Unterschiede zwischen Donald Trump und Hillary Clinton s Adv output Es gibt eine grundlegende philosophischer Art , wie Unterschied e zwischen Donald Trump und Hillary Clinton s TF Original input And in this vein , he passed the prize money of 2 5,000 euros on straight away Adv input And as this vein , he passed the prize money of 2 5,000 euros on straight away Original output Und in diesem Sinne hat er sofort das Preis geld von 2 5.000 Euro über wiesen Adv output Und als diese Art , ging er sofort das Preis geld von 2 5.000 Euro weiter Table 9: Adversarial examples for LSTM and Transformer (shortened as TF) models with the target keyword "Art." in the output.

since perturbation in one word can affect all the attention scores. In this section, we provide an explanation of this phenomenon by studying how error propagates through the self-attention architecture. We show that a perturbation of one input embedding in fact has only a sparse effect on the attention scores when the input embeddings are scattered enough in the space.

Sensitivity of Self-Attention Layers: First, we consider the simple case of one self-attention layer with a single head. Assume a sentence has $n$ input words and each word is represented by a $d$-dimensional embedding vector, denoted by $x_1, \ldots, x_n \in \mathbb{R}^d$. We use $W^Q, W^K, W^V \in \mathbb{R}^{d \times k}$ to denote the query, key and value transformations. The contribution of each element $j$ to $i$ is then computed by $s_{ij} = x_i^T W^Q (W^K)^T x_j$, and the $i$-th embedding at the next layer is obtained by
$$z_i = \sum_j \frac{e^{s_{ij}}}{\sum_k e^{s_{ik}}} \, (W^V x_j).$$
Sometimes $z_i$ is fed into another linear layer to obtain the embeddings. Now, consider that a small perturbation is added at a particular index $\bar{j}$, such that $x_{\bar{j}}$ is changed to $x_{\bar{j}} + \Delta x$ while all the other $\{x_j \mid j \neq \bar{j}\}$ remain unchanged. We then study how much this perturbation affects $\{z_i\}_{i \in [n]}$. For a particular $i$ ($\neq \bar{j}$), the scores $s_{ij}$ change in only one term, since
$$s'_{ij} = \begin{cases} s_{ij} & \text{if } j \neq \bar{j} \\ s_{ij} + x_i^T W^Q (W^K)^T \Delta x & \text{if } j = \bar{j} \end{cases} \qquad (1)$$
where we use $s'_{ij}$ to denote the value after the perturbation. Therefore, with the perturbed input, each set $\{s_{ij}\}_{j=1}^{n}$ has only one changed term. Furthermore, the changed term in equation (1) is the inner product between $x_i$ and a fixed vector $W^Q (W^K)^T \Delta x$; although this can be large for some particular $x_i$ pointing in a direction similar to $W^Q (W^K)^T \Delta x$, if the embeddings $\{x_i\}_{i=1}^{n}$ are scattered enough over the space, the inner products cannot be large for all $\{x_i\}_{i=1}^{n}$. Therefore, the change to the next layer will be sparse. For instance, we can prove the sparsity under some distributional assumptions on $\{x_i\}$:

Theorem 1. Assume $\|\Delta x\| \leq \delta$ and $\{x_i\}_{i=1}^{n}$ are $d$-dimensional vectors uniformly distributed on the unit sphere. Then $\mathbb{E}[|s'_{i\bar{j}} - s_{i\bar{j}}|] \leq \frac{C\delta}{\sqrt{d}}$ with $C = \|W^Q\|\|W^K\|$, and $P(|s'_{i\bar{j}} - s_{i\bar{j}}| \geq \epsilon) \leq \frac{C\delta}{\epsilon\sqrt{d}}$.

Proof. The value $\mathbb{E}[s'_{i\bar{j}} - s_{i\bar{j}}] = \mathbb{E}[x_i^T z]$, where $z = W^Q (W^K)^T \Delta x$ is a fixed vector, and it is easy to derive $\|z\| \leq \|W^Q\|\|W^K\|\delta$. To bound this expectation, we first bound $a_1 = \mathbb{E}[x_i^T e_1]$ where $e_1 = [1, 0, \ldots, 0]$. Due to rotation invariance we have $a_1 = \cdots = a_d$ and $\sum_i a_i^2 = 1$, so $|a_1| = \frac{1}{\sqrt{d}}$. This implies $\mathbb{E}[x_i^T z] \leq \frac{C\delta}{\sqrt{d}}$. Using Markov's inequality, we then obtain the probability result.

Therefore, as long as the norms of $W^Q$ and $W^K$ are not too large (they are usually regularized by L2 during training) and the dimension $d$ is large enough, there will be a significant number of indices $i$ for which $s_{i\bar{j}}$ is perturbed negligibly.
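The sparsity argument can be checked numerically. The sketch below is our own construction (all names, sizes, and constants are illustrative): it perturbs a single token embedding in a random single-head attention layer and verifies that, for every other row $i$, only column $\bar{j}$ of the score matrix moves, and that the movement is small on average.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 256, 64
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)          # x_i roughly uniform on the unit sphere
WQ = rng.standard_normal((d, k)) / np.sqrt(d)           # query transformation W^Q
WK = rng.standard_normal((d, k)) / np.sqrt(d)           # key transformation W^K
WV = rng.standard_normal((d, k)) / np.sqrt(d)           # value transformation W^V

def attention(X):
    S = (X @ WQ) @ (X @ WK).T                           # s_ij = x_i^T W^Q (W^K)^T x_j
    A = np.exp(S - S.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)                   # softmax over j
    return S, A @ (X @ WV)                              # scores and next-layer embeddings z_i

j_bar = 7
X_adv = X.copy()
X_adv[j_bar] += 0.05 * rng.standard_normal(d)           # perturb one embedding by a small Δx

S, Z = attention(X)
S_adv, Z_adv = attention(X_adv)
others = np.arange(n) != j_bar
diff = np.abs(S_adv - S)
print(diff[others][:, others].max())                    # ~0: rows i != j_bar change only in column j_bar
print(diff[others, j_bar].mean())                       # small on average, as Theorem 1 predicts
print(np.linalg.norm(Z_adv - Z) / np.linalg.norm(Z))    # relative embedding variation, cf. R_e discussed below
```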
In contrast, embeddings from RNN-based models are relatively more sensitive to perturbation of one word, as shown below. Similar to the previous case, we assume a sequence x1, . . . , xn, and a word x¯j is perturbed by ∆x. For the vanilla RNN model, the embeddings are sequentially computed as zi = σ(Axi + Bzi−1). If x¯j is perturbed, then all the {zi}n i=¯j will be altered. Therefore, the at1527 tacker can more easily influence all the embeddings. As an illustration of the proposed theory, we plot a comparison of the degree of embeddings variation from two models after changing one word in Fig. 4. We observe that, for self-attentive models, the distribution of change on embeddings is sparse after going through the first self-attention layer (layer 1) and then gradually propagate to the whole sequence when passing through more layers. In contrast, the embeddings from LSTM exhibit a denser pattern. To further validate our analysis, we calculate the ratio of the L2 norms of embeddings variation. Specifically, let z and zadv denote the embeddings of the original sentence and adversarial input, respectively. We represent relative embedding variation Re = ∥z −zadv∥/∥z∥. For the GS-EC attack in the sentiment analysis task, embeddings from the LSTM model has an average Re of 0.83 whereas for the BERT model it is 0.56 under the same attack by changing one word. This supports our claim that the impact of an adversarial example is more severe on the LSTM model than BERT, which presumably plays an important role in the robustness of self-attentive models. (a) LSTM (b) BERT Figure 4: Comparison of L2 norm of embedding variations after changing one word (marked by red box) in the input to (a) LSTM (b) BERT. 8 Related Work Robustness of neural network models has been a prominent research topic since Szegedy et al. (2013) discovered that CNN-based image classification models are vulnerable to adversarial examples. However, attempts to examine the robustness of NLP models are relatively few and far between. Previous work on attacking neural NLP models include using Fast Gradient Sign Method (Goodfellow et al., 2015) to perturb the embedding of RNN-based classifiers (Papernot et al., 2016; Liang et al., 2017), but they have difficulties mapping from continuous embedding space to discrete input space. Ebrahimi et al. (2018) propose the ‘HotFilp’ method that replaces the word or character with the largest difference in the Jacobian matrix. Li et al. (2016) employ reinforcement learning to find the optimal words to delete in order to fool the classifier. More recently, Yang et al. (2018) propose a greedy method to construct adversarial examples by solving a discrete optimization problem. They show superior performance than previous work in terms of attack success rate, but the greedy edits usually degrade the readability or significantly change the semantics. Zhao et al. (2018) utilize generative adversarial networks (GAN) to generate adversarial attacks against black-box models for applications including image classification, textual entailment, and machine translation. Alzantot et al. (2018) propose to use a pre-compiled list of semantically similar words to alleviate this issue, but leads to lower successful rate as shown in our experiments. We thus include the latest greedy and list-based approaches in our comparisons. In addition, the concept of adversarial attacks has also been explored in more complex NLP tasks. 
For example, Jia and Liang (2017) attempt to craft adversarial input to a question answering system by inserting irrelevant sentences at the end of a paragraph. Cheng et al. (2018) develop an algorithm for attacking seq2seq models with specific constraints on the content of the adversarial examples. Belinkov and Bisk (2018) compare typos and artificial noise as adversarial input to machine translation models. Also, Iyyer et al. (2018) propose a paraphrase generator model learned from back-translation data to generate legitimate paraphrases of a sentence as adversaries. However, the semantic similarity is not guaranteed. In terms of comparisons between LSTM and Transformers, Tang et al. (2018) show that multiheaded attention is a critical factor in Transformer when learning long distance linguistic relations. This work is unique in a number of aspects. First, we examine the robustness of uni- and bidirectional self-attentive model as compared to recurrent neural networks. And, we devise novel attack methods that take advantage of the embedding distance to maximize semantic similarity between real and adversarial examples. Last but not least, we provide detail observations of the inter1528 nal variations of different models under attack and theoretical analysis regarding their levels of robustness. 9 Conclusions We show that self-attentive models are more robust to adversarial attacks than recurrent networks under small input perturbations on three NLP tasks, i.e., sentiment analysis, entailment, and translation. We provide theoretical explanations regarding why the self-attention structure leads to better robustness, in addition to illustrative examples that visualize the model’s internal variations. Future work includes developing a adversarial training scheme as well as devising a more robust architecture based on our findings. Acknowledgements We are grateful for the insightful comments from anonymous reviewers. This work is supported by the Ministry of Science and Technology of Taiwan under grant numbers 107-2917-I-004-001, 108-2634-F-001-005. The author Yu-Lun Hsieh wishes to acknowledge, with thanks, the Taiwan International Graduate Program (TIGP) of Academia Sinica for financial support towards attending this conference. We also acknowledge the support from NSF via IIS1719097, Intel and Google Cloud. References Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890–2896. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In International Conference on Learning Representations. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 conference on machine translation (wmt17). In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, pages 169–214, Copenhagen, Denmark. Association for Computational Linguistics. Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, and Cho-Jui Hsieh. 2018. 
Seq2sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples. CoRR, abs/1803.01128. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. Hotflip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 31–36. Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In International Conference on Learning Representations. Thanh-Le Ha, Jan Niehues, and Alexander Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder. arXiv preprint arXiv:1611.04798. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Lstm can solve hard long time lag problems. In Advances in neural information processing systems, pages 473–479. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 1875– 1885. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021–2031. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220. Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2017. Deep text classification can be fooled. arXiv preprint arXiv:1704.08006. 1529 Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Nicolas Papernot, Patrick McDaniel, Ananthram Swami, and Richard Harang. 2016. Crafting adversarial input sequences for recurrent neural networks. In Military Communications Conference, MILCOM 2016-2016 IEEE, pages 49–54. IEEE. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Gongbo Tang, Mathias Müller, Annette Rios, and Rico Sennrich. 2018. Why self-attention? a targeted evaluation of neural machine translation architectures. In Conference on Empirical Methods in Natural Language Processing, pages 4263–4272. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, and Michael I Jordan. 2018. Greedy attack and gumbel attack: Generating adversarial examples for discrete data. arXiv preprint arXiv:1805.12316. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 649–657. Curran Associates, Inc. Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2018. Generating natural adversarial examples. In International Conference on Learning Representations.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1530–1537 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1530 Exact Hard Monotonic Attention for Character-Level Transduction Shijie Wu@ and Ryan Cotterell@,H @Department of Computer Science, Johns Hopkins University HDepartment of Computer Science and Technology, University of Cambridge [email protected], [email protected] Abstract Many common character-level, string-tostring transduction tasks, e.g. graphemeto-phoneme conversion and morphological inflection, consist almost exclusively of monotonic transduction. Neural sequence-tosequence models with soft attention, which are non-monotonic, often outperform popular monotonic models. In this work, we ask the following question: Is monotonicity really a helpful inductive bias in these tasks? We develop a hard attention sequence-to-sequence model that enforces strict monotonicity and learns a latent alignment jointly while learning to transduce. With the help of dynamic programming, we are able to compute the exact marginalization over all monotonic alignments. Our models achieve state-of-the-art performance on morphological inflection. Furthermore, we find strong performance on two other character-level transduction tasks. Code is available at https://github.com/ shijie-wu/neural-transducer. 1 Introduction Many tasks in natural language can be treated as character-level, string-to-string transduction. The current dominant method is the neural sequenceto-sequence model with soft attention (Bahdanau et al., 2015; Luong et al., 2015). This method has achieved state-of-the-art results in a plethora of tasks, for example, grapheme-to-phoneme conversion (Yao and Zweig, 2015), named-entity transliteration (Rosca and Breuel, 2016) and morphological inflection generation (Cotterell et al., 2016). While soft attention is very similar to a traditional alignment between the source characters and target characters in some regards, it does not explicitly a distribution over alignments. On the other hand, neural sequence-to-sequence models with hard alignment (Xu et al., 2015; Wu et al., 2018) are analogous to the latent alignment in the classic IBM models for machine translation, which do model the alignment distribution explicitly (Brown et al., 1993). The standard versions of both soft and hard attention are non-monotonic. However, if we look at the data in grapheme-to-phoneme conversion, named-entity transliteration, and morphological inflection—examples are shown in Fig. 1—we see that the tasks require almost exclusively monotonic transduction. Yet, counterintuitively, the state of the art in high resource morphological inflection is held by non-monotonic models (Cotterell et al., 2017)! Indeed, in a recent controlled experiment, Wu et al. (2018) found non-monotonic models (with either soft attention or hard alignment) outperform popular monotonic models (Aharoni and Goldberg, 2017) in the three above mentioned tasks. However, the inductive bias of monotonicity, if correct, should help learn a better model or, at least, learn the same model. In this paper, we hypothesize that the underperformance of monotonic models stems from the lack of joint training of the alignments with the transduction. Generalizing the model of Wu et al. 
(2018) to enforce monotonic alignments, we show that, for all three tasks considered, monotonicity is a good inductive bias and jointly learning a monotonic alignment improves performance. We provide an exact, cubic-time, dynamic-programming inference algorithm to compute the log-likelihood and an approximate greedy decoding scheme. Empirically, our results indicate that, rather than the pipeline systems of Aharoni and Goldberg (2017) and Makarov et al. (2017), we should jointly train monotonic alignments with the transduction model, and, indeed, we set the single model state of the art on the task of morphological inflection.1 1The state of the art for morphological inflection is held by ensemble systems, much like parsing and other structured 1531 l i p u k e Morphological Inflection l i p u k k e e l l e Transliteration A A C H E N   Grapheme-to-phoneme a c t i o n AE K SH AH N Task Source Target N AT+ALL SG Tag Figure 1: Example of source and target string for each task. Tag guides transduction in morphological inflection. 2 Hard Attention 2.1 Preliminary We assume the source string x ∈Σ∗ x and the target string y ∈Σ∗ y have finite vocabularies Σx = {x1, . . . , x|Σx|} and Σy = {y1, . . . , y|Σy|}, respectively. In tasks where the tag is provided, i.e., labeled transduction (Zhou and Neubig, 2017), we denote the tag as an ordered set t ∈Σ∗ t with a finite tag vocabulary Σt = {t1, . . . , t|Σt|}. We define the set A = {1, . . . , |x|}|y| to be set of all alignments from x to y where an alignment aligns each target character yi to exactly one source character in x. In other words, it allows zero-to-one2 or many-to-one alignments between x and y. For an a ∈A, ai = j refers to the event that yi is aligned to xj, the ith character of y and the jth character of x. 2.2 0th-order Hard Attention Hard attention was first introduced to the literature by Xu et al. (2015). We, however, follow Wu et al. (2018) and use a tractable variant of hard attention and model the probability of a target string y given an input string x as the following: p(y | x) = X a∈A(x,y) p(y, a | x) = X a∈A |y| Y i=1 p(yi | ai, y<i, x) p(ai | y<i, x) | {z } exponential number of terms = |y| Y i=1 |x| X ai=1 p(yi | ai, y<i, x) p(ai | y<i, x) | {z } polynomial number of terms (1) where we show how one can rearrange the terms to compute the function in polynomial time. prediction tasks. We present the new best individual system. 2Zero in the sense of non-character like BOS or EOS The model above is exactly an 0th-order neuralized hidden Markov model (HMM). Specifically, p(yi | ai, y<i, x) can be regarded as an emission distribution and p(ai | y<i, x) can be regarded as a transition distribution, which does not condition on the previous alignment. Hence, we will refer to this model as 0th-order hard attention. The likelihood can be computed in O(|x| · |y| · |Σy|) time. 2.3 1st-order Hard Attention To enforce monotonicity, hard attention with conditionally independent alignment decisions is not enough: The model needs to know the previous alignment position when determining the current alignment position. Thus, we allow the transition distribution to condition on previous one alignment p(ai | ai−1, y<i, x) and it becomes a 1st-order neuralized HMM. We display this model as a graphical model in Fig. 2. We will refer to it as 1st-order hard attention. 
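Before turning to the first-order extension, the polynomial-time rearrangement in Eq. (1) is small enough to write down directly. The sketch below is ours, not the released code; it assumes `emission[i, j]` already holds p(y_i | a_i = j, y_<i, x) evaluated at the gold character y_i, and `attention[i, j]` holds p(a_i = j | y_<i, x).

```python
import torch

def zeroth_order_log_likelihood(emission: torch.Tensor, attention: torch.Tensor) -> torch.Tensor:
    """Both tensors have shape (|y|, |x|); returns the scalar log p(y | x) of Eq. (1)."""
    per_position = (attention * emission).sum(dim=-1)   # marginalize the alignment a_i at each step i
    return per_position.log().sum()                      # product over target positions, in log space
```

The first-order monotonic model defined next replaces this per-position sum with a forward recursion over pairs of alignment positions.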
Generalizing the 0th-order model, we define 1st-order extension as: p(y | x) = X a∈A(x,y) p(y, a | x) = X a∈A |y| Y i=1 p(yi | ai, y<i, x) p(ai | ai−1, y<i, x) | {z } exponential number of terms = |y| Y i=1 |x| X ai−1=1 |x| X ai=1 p(yi | ai) p(ai | ai−1)α(ai−1) | {z } polynomial number of terms (2) where α(ai−1) is the forward probability, calculated using the forward algorithm (Rabiner, 1989) with α(a0, y0) = 1, and p(a1 | a0) = p(a1 | <BOS>, x) is the initial alignment distribution. For simplicity, we drop y<i and x in p(yi | ai) and p(ai | ai−1). For completeness, we include the 1532 recursive definition of the forward probability: α(ai) = p(yi | ai) |x| X ai−1=1 p(ai | ai−1) α(ai−1) α(a1) = p(y1 | a1) p(a1 | a0) α(a0) Thus, computation of the likelihood in our 1st-order hard attention model is O(|x|2 · |y| · |Σy|). Decoding at test time, however, is hard and we resort to a greedy scheme, described in Alg. 1. To see why it is hard, note that the dependence on y<i means that we have a neural language model scoring the target string as it is being transduced. Because the dependence is unbounded, there will be no dynamic program that allows for efficient computation. 3 A Neural Parameterization with Enforced Monotonicity The goal of this section is to take the 1st-order model of §2 and show how we can straightforwardly enforce monotonic alignments. We will achieve this by adding structural zeros to the distribution, which will still allow us to perform efficient inference with dynamic programming. We follow the neural parameterization of Wu et al. (2018). The source string x is represented by a sequence of character embeddings vectors, which are fed into an encoder bidirectional LSTM (Hochreiter and Schmidhuber, 1997) to produce hidden state representations he j. The emission distribution p(yi | ai, y<i, x) depends on these encodings he j and the decoder hidden states hd i, produced by hd i = LSTM([ed(yi−1); ht], hd i−1) where ed encodes target characters into character embeddings. The tag embedding ht is produced by ht = ReLU(Y [et(t1); . . . ; et(t|Σt|)]) where et maps the tag tk into tag embedding ht k ∈ Rdt or zero vector 0 ∈Rdt, depends on whether the tag tk is presented. Note that Y ∈Rdt×|Σt| dt is a learned parameter. Also he j ∈R2dh, hd i ∈Rdh and ht ∈Rdt are hidden states. The Emission Distributon. All of our hardattention models employ the same emission distribution parameterization, which we define below p(yi | ai, y<i, x) = softmax W f(hd i, he ai)  f(hd i, he ai) = tanh V [hd i; he ai]  x a1 a2 a3 a4 hd 1 hd 2 hd 3 hd 4 y1 y2 y3 y4 Figure 2: Our monotonic hard-attention model viewed as a graphical model. The circular nodes are random variables and the diamond nodes deterministic variables. We have omitted arcs from x to y1, y2, y3 and y4 for clarity (to avoid crossing arcs). where V ∈R3dh×3dh and W ∈R|Σy|×3dh are learned parameters. 0th-order Hard Attention. In the case of the 0thorder model, the distribution is computed by a bilinear attention function with eq. (1) p(ai = j | y<i, x) = exp(hd i ⊤T he j) P|x| j′=1 exp(hd i ⊤T he j′) where T ∈Rdh×2dh is a learned parameter. 0th-order Hard Monotonic Attention. 
We may enforce string monotonicity by zeroing out any non-monotonic alignment without adding any additional parameters, which can be done through adding structural zeros to the distribution as follows p(ai = j |ai−1 = j′, y<i, x) = 1{j ≥j′} exp(hd i ⊤T he j) P|x| j′=1 1{j ≥j′} exp(hd i ⊤T he j′) These structural zeros prevent the alignments from jumping backwards during transduction and, thus, enforce monotonicity. The parameterization is identical to the 0th-order model up to the enforcement of the hard constraint with eq. (2). 1st-order Hard Monotonic Attention. We may also generalize the 0th-order case by adding more parameters. This will equip the model with a more expressive transition function. In this case, we take 1533 Algorithm 1 Greedy decoding. (N is the maximum length of target string.) 1: for i = 1, · · · , N do 2: if i = 1 then 3: y∗ i = argmaxyi P|x| ai=1 p(yi | ai)p(ai | ai−1) α(a0) ▷Greedy decoding 4: α(a1) = p(y∗ 1 | a1) p(a1 | a0) α(a0) ▷Forward probability 5: else 6: y∗ i = argmaxyi P|x| ai=1 p(yi | ai) P|x| ai−1=1 p(ai | ai−1) α(ai−1) ▷Greedy decoding 7: α(ai) = p(y∗ i | ai) P|x| ai−1=1 p(ai | ai−1) α(ai−1) ▷Forward probability 8: if y∗ i = EOS then 9: return y∗ the 1st-order hard attention to be an offset-based transition distribution similar to Wang et al. (2018): p(ai | ai−1, y<i, x) = ( softmax(U[hd i; T he ai−1])) 0 ≤∆≤w 0 otherwise where ∆= ai −ai−1 is relative distance to previous attention position and U ∈R(w+1)×2dh, a learned parameter. Note that, as before, we also enforce monotonicity as a hard constraint in this parameterization. 4 Related Work There have been previous attempts to look at monotonicity in neural transduction. Graves (2012) first introduced the monotonic neural transducer for speech recognition. Building on this, Yu et al. (2016) proposes using a separated shift/emit transition distribution to allow more expressive model. Like us, they also consider morphological inflection and outperform a (weaker) soft attention baseline. Rastogi et al. (2016) offer a neural parameterization of a finite-state transducer, which implicitly encodes monotonic alignments. Instead of learning the alignments directly, Aharoni and Goldberg (2017) take the monotonic alignments from an external model (Sudoh et al., 2013) and train the neural model with these alignments. In followup work, Makarov et al. (2017) show this twostage approach to be effective, winning the CoNLLSIGMORPHON 2017 shared task on morphological inflection (Cotterell et al., 2017). Raffel et al. (2017) propose a stochastic monotonic transition process to allow sample-based online decoding. 5 Experiments 5.1 Experiments Design Tasks. We consider three character-level transduction tasks: grapheme-to-phoneme conversion (Weide, 1998; Sejnowski and Rosenberg, 1987), named-entity transliteration (Zhang et al., 2015) and morphological inflection in high-esource setting (Cotterell et al., 2017). Empirical Comparison. We compare (i) soft attention without input-feeding (SOFT) (Luong et al., 2015), (ii) 0th-order hard attention (0-HARD) (Wu et al., 2018), (iii) 0th-order monotonic hard attention (0-MONO) and (iv) 1st-order monotonic hard attention (1-MONO). The SOFT, 0-HARD and 0MONO models have an identical number of parameters, but the 1-MONO has more. All of them have approximately 8.6M parameters. Experimental details and hyperparameters may be found in App. A. 5.2 Experimental Findings Finding #1: Morphological Inflection. 
The first empirical finding in our study is that we achieve single-model, state-of-the-art performance on the CoNLL-SIGMORPHON 2017 shared task dataset. The results are shown in Tab. 2. We find that the 1-MONO ties with the 0-MONO system, indicating the additional parameters do not add much. Both of these monotonic systems surpass the non-monotonic system 0-HARD and SOFT. We also report comparison to other top systems at the task in Tab. 1. The previous state-of-the-art model, Bergmanis et al. (2017), is a non-monotonic system that outperformed the monotonic system of Makarov et al. (2017). However, Makarov et al. (2017) is a pipeline system that took alignments from an existing aligner; such a system has no manner, by which it can recover from poor initial 1534 Morphological Inflection ACC Silfverberg et al. (2017) 93.0 SOFT 93.4 Makarov et al. (2017) 93.9 0-HARD 94.5 Bergmanis et al. (2017) 94.6 Makarov and Clematide (2018) 94.6 0-MONO 94.8 1-MONO 94.8 Table 1: Average dev performance on morphological inflection of our models against single models from the 2017 shared task. All systems are single model, i.e., without ensembling. Why dev? No participants submitted single-model systems for evaluation on test and the best systems were not open-sourced, constraining our comparison. Note we report numbers from their paper.3 alignment. We show that jointly learning monotonic alignments lead to improved results. Finding #2: Effect of Strict Monotonicity. The second finding is that by comparing SOFT, 0-HARD, 0-MONO in Tab. 2, we observe 0-MONO outperforms 0-HARD and 0-HARD in turns outperforms SOFT in all three tasks. This shows that monotonicity should be enforced strictly since strict monotonicity does not hurt the model. We contrast this to the findings of Wu et al. (2018), who found the nonmonotonic models outperform the monotonic ones; this suggests strict monotonicity is more helpful when the model is allowed to learn the alignment distribution jointly. Finding #3: Do Additional Parameters Help? The third finding is that 1-MONO has a more expressive transition distribution and, thus, outperforms 0-MONO and 0-HARD in G2P. However, it performs as well as or worse on the other tasks. This tells us that the additional parameters are not always necessary for improved performance. Rather, it is the hard constraint that matters—not the more expressive distribution. However, we remark that enforcing the monotonic constraint does come at an additional computational cost: an additional factor O(|x|). 6 Conclusion We expand the hard-attention neural sequenceto-sequence model of Wu et al. (2018) to enforce monotonicity. We show, empirically, that enforcing monotonicity in the alignments found by 3Some numbers are obtained by contacting authors. Trans G2P MorInf ACC MFS WER PER ACC MLD SOFT 40.4 0.893 29.3 0.071 92.9 0.157 0-HARD 41.1⋆ 0.894 29.2⋆ 0.070 93.8⋆ 0.126 0-MONO 41.2⋆ 0.895 29.0⋆× 0.072 94.4⋆× 0.113 1-MONO 40.8 0.893 28.2⋆׆ 0.069 94.4⋆× 0.116 Table 2: Average test performance of namded-entity transliteration (Trans), grapheme-to-phoneme conversion (G2P) and morphological inflection (MorInf). First group has exactly same number of parameter while the second group has slightly more parameter. ⋆, × and † indicate statistical significant improvement against SOFT, 0-HARD and 0-MONO on language-level paired permutation test (p < 0.05). 
hard attention models helps significantly, and we achieve state-of-the-art performance on the morphological inflection using data from the CoNLLSIGMORPHON 2017 shared task. We isolate the effect of monotonicity in a controlled experiment and show monotonicity is a useful hard constraint for three tasks, and speculate previous underperformance is due to a lack of joint training. Acknowledgments The final author acknowledges a Facebook Fellowship. References Roee Aharoni and Yoav Goldberg. 2017. Morphological inflection generation with hard monotonic attention. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2004–2015, Vancouver, Canada. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR), volume abs/1409.0473. Toms Bergmanis, Katharina Kann, Hinrich Schütze, and Sharon Goldwater. 2017. Training data augmentation for low-resource morphological inflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 31–39, Vancouver. Association for Computational Linguistics. Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. 1535 Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017. The CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages. In Proceedings of the CoNLL-SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, Vancouver, Canada. Association for Computational Linguistics. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task— morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 10–22. Association for Computational Linguistics. Alex Graves. 2012. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Neural Computation, 9(8):1735–1780. Diederick P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR). Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics. Peter Makarov and Simon Clematide. 2018. Imitation learning for neural morphological string transduction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2877–2882. Peter Makarov, Tatiana Ruzsics, and Simon Clematide. 2017. Align and copy: UZH at SIGMORPHON 2017 shared task for morphological reinflection. Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 49–57. Lawrence R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257– 286. 
Colin Raffel, Minh-Thang Luong, Peter J. Liu, Ron J. Weiss, and Douglas Eck. 2017. Online and lineartime attention by enforcing monotonic alignments. In International Conference on Machine Learning (ICML), pages 2837–2846. Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighting finite-state transductions with neural context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 623–633, San Diego, California. Association for Computational Linguistics. Mihaela Rosca and Thomas Breuel. 2016. Sequenceto-sequence neural network models for transliteration. arXiv preprint arXiv:1610.09565. Terrence J. Sejnowski and Charles R. Rosenberg. 1987. Parallel networks that learn to pronounce english text. Complex Systems, 1. Miikka Silfverberg, Adam Wiemerslage, Ling Liu, and Lingshuang Jack Mao. 2017. Data augmentation for morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 90–99, Vancouver. Association for Computational Linguistics. Katsuhito Sudoh, Shinsuke Mori, and Masaaki Nagata. 2013. Noise-aware character alignment for bootstrapping statistical machine transliteration from bilingual corpora. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 204–209. Weiyue Wang, Derui Zhu, Tamer Alkhouli, Zixuan Gan, and Hermann Ney. 2018. Neural hidden Markov model for machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 377–382. Association for Computational Linguistics. R.L. Weide. 1998. The Carnegie Mellon pronouncing dictionary. Shijie Wu, Pamela Shapiro, and Ryan Cotterell. 2018. Hard non-monotonic attention for character-level transduction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4425–4438. Association for Computational Linguistics. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, ICML, pages 2048–2057. Kaisheng Yao and Geoffrey Zweig. 2015. Sequenceto-sequence neural net models for grapheme-tophoneme conversion. In INTERSPEECH 2015, pages 3330–3334, Dresden, Germany. Lei Yu, Jan Buys, and Phil Blunsom. 2016. Online segment to segment neural transduction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1307–1316. 1536 Min Zhang, Haizhou Li, Rafael E. Banchs, and A. Kumaran. 2015. Whitepaper of news 2015 shared task on machine transliteration. In NEWS@ACL. Chunting Zhou and Graham Neubig. 2017. Multispace variational encoder-decoders for semisupervised labeled sequence transduction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 310–320, Vancouver, Canada. Association for Computational Linguistics. 1537 A Experimental Details A.1 Tasks. We ask the authors of Wu et al. (2018) for the split data of grapheme-to-phoneme conversion (CMUDict (Weide, 1998) and NetTalk (Sejnowski and Rosenberg, 1987)) and NEWS 2015 shared task on named-entity transliteration. 
In named-entity transliteration, we only run experiments on 11 language pairs.4 Grapheme-to-Phoneme Conversion is evaluated by word error rate (WER) and phoneme error rate (PER) (Yao and Zweig, 2015), where PER is the edit distance divided by the length of the phonemes. Named-entity transliteration is evaluated by word accuracy (ACC) and mean F-score (MFS) (Zhang et al., 2015). F-score is computed by LCS(c, r) = 1 2(|c| + |r| −ED(c, r)) Ri = LCS(ci, ri) |ri| Pi = LCS(ci, ri) |ci| FSi = 2Ri × Pi Ri + Pi where ri and ci is the i-th reference and prediction and ED(c, r) is the edit distance between c and r. Morphological inflection is evaluated by word accuracy (ACC) and average edit distance (MLD) (Cotterell et al., 2017). A.2 Parameterization. For completeness, we also include the parameterization of soft attention. p(yi | y<i, x) = softmax W f(hd i, ci)  ci = |x| X j=1 αij he j αij = exp(eij) P|x| j=1 exp(eij) eij = hd i ⊤T he j The dimension of character and tag embedding are 200 and 40, respectively. The encoder and decoder LSTM both have 400 hidden dimensions (dh). We also have a 2 layer encoder LSTM. We have 0.4 dropout in embedding and encoder LSTM. 4Ar–En, En–Ba, En–Hi, En–Ja, En–Ka, En–Ko, En–Pe, En–Ta, En–Th, Jn–Jk and Th–En. The w in 1st-order hard monotonic attention model is 4. A.3 Optimization. The model is trained with Adam (Kingma and Ba, 2015) and the learning rate is 0.001. We halve the learning rate whenever the development loglikelihood increase and we stop early when the learning rate reaches 0.00001. We apply gradient clipping with maximum gradient norm 5. The models are selected by development evaluation metric and decoded greedily since no improvements are observed when using beam search (Wu et al., 2018).
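For completeness, the mean F-score of A.1 can be computed directly from the identity LCS(c, r) = (|c| + |r| − ED(c, r)) / 2. The following sketch uses our own helper names and treats characters as the unit of comparison; it simply mirrors the definitions above rather than the official scoring script.

```python
def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance ED(a, b) with a rolling-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def f_score(prediction: str, reference: str) -> float:
    lcs = (len(prediction) + len(reference) - edit_distance(prediction, reference)) / 2
    recall, precision = lcs / len(reference), lcs / len(prediction)
    return 0.0 if recall + precision == 0 else 2 * recall * precision / (recall + precision)

def mean_f_score(predictions, references) -> float:
    return sum(f_score(c, r) for c, r in zip(predictions, references)) / len(references)
```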
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1538–1548 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1538 A Lightweight Recurrent Network for Sequence Modeling Biao Zhang1 Rico Sennrich1,2 1School of Informatics, University of Edinburgh [email protected], [email protected] 2Institute of Computational Linguistics, University of Zurich Abstract Recurrent networks have achieved great success on various sequential tasks with the assistance of complex recurrent units, but suffer from severe computational inefficiency due to weak parallelization. One direction to alleviate this issue is to shift heavy computations outside the recurrence. In this paper, we propose a lightweight recurrent network, or LRN. LRN uses input and forget gates to handle long-range dependencies as well as gradient vanishing and explosion, with all parameterrelated calculations factored outside the recurrence. The recurrence in LRN only manipulates the weight assigned to each token, tightly connecting LRN with self-attention networks. We apply LRN as a drop-in replacement of existing recurrent units in several neural sequential models. Extensive experiments on six NLP tasks show that LRN yields the best running efficiency with little or no loss in model performance.1 1 Introduction Various natural language processing (NLP) tasks can be categorized as sequence modeling tasks, where recurrent networks (RNNs) are widely applied and contribute greatly to state-of-the-art neural systems (Yang et al., 2018; Peters et al., 2018; Zhang et al., 2018; Chen et al., 2018; Kim et al., 2019). To avoid the optimization bottleneck caused by gradient vanishing and/or explosion (Bengio et al., 1994), Hochreiter and Schmidhuber (1997) and Cho et al. (2014) develop gate structures to ease information propagation from distant words to the current position. Nevertheless, integrating these traditional gates inevitably increases computational overhead which is accumulated along token positions due to the sequen1Source code is available at https://github.com/ bzhangGo/lrn. tial nature of RNNs. As a result, the weak parallelization of RNNs makes the benefits from improved model capacity expensive in terms of computational efficiency. Recent studies introduce different solutions to this issue. Zhang et al. (2018) introduce the addition-subtraction twin-gated recurrent unit (ATR), which reduces the amount of matrix operations by developing parameter-shared twin-gate mechanism. Lei et al. (2018) introduce the simple recurrent unit (SRU), which improves model parallelization by moving matrix computations outside the recurrence. Nevertheless, both ATR and SRU perform affine transformations of the previous hidden state for gates, though SRU employs a vector parameter rather than a matrix parameter. In addition, SRU heavily relies on its highway component, without which the recurrent component itself suffers from weak capacity and generalization (Lei et al., 2018). In this paper, we propose a lightweight recurrent network (LRN), which combines the strengths of ATR and SRU. The structure of LRN is simple: an input gate and a forget gate are applied to weight the current input and previous hidden state, respectively. LRN has fewer parameters than SRU, and compared to ATR, removes heavy calculations outside the recurrence, generating gates based on the previous hidden state without any affine transformation. 
In this way, computation inside each recurrent step is highly minimized, allowing better parallelization and higher speed. The gate structure endows LRN with the capability of memorizing distant tokens as well as handling the gradient vanishing and explosion issue. This ensures LRN’s expressiveness and performance on downstream tasks. In addition, decomposing its recurrent structure discovers the correlation of input/forget gate with key/query in selfattention networks (Vaswani et al., 2017), where 1539 these two gates together manipulate the weight assigned to each token. We also reveal how LRN manages long-term and short-term memories with the decomposition. We carry out extensive experiments on six NLP tasks, ranging from natural language inference, document classification, machine translation, question answering and part-of-speech tagging to language modeling. We use LRN as a drop-in replacement of existing recurrent units in different neural models without any other modification of model structure. Experimental results show that LRN outperforms SRU by 10%∼20% in terms of running speed, and is competitive with respect to performance and generalization compared against all existing recurrent units. 2 Related Work Past decades have witnessed the rapid development of RNNs since the Elman structure was proposed (Elman, 1990). Bengio et al. (1994) point out that the gradient vanishing and explosion issue impedes the optimization and performance of RNNs. To handle this problem, Hochreiter and Schmidhuber (1997) develop LSTM where information and gradient from distant tokens can successfully pass through the current token via a gate structure and a memory cell. Unfortunately, the enhanced expressivity via complex gates comes at the cost of sacrificing computational efficiency, which becomes more severe when datasets are scaled up. Simplifying computation but keeping model capacity in RNNs raises a new challenge. One direction is to remove redundant structures in LSTM. Cho et al. (2014) remove the memory cell and introduce the gated recurrent unit (GRU) with only two gates. Lee et al. (2017) introduce an additive structure to generate hidden representations with linear transformed inputs directly, though we empirically observe that non-linear activations can stabilize model training. Zhang et al. (2018) propose a twin-gate mechanism where input and forget gate are simultaneously produced from the same variables. We extend this mechanism by removing the affine transformation of previous hidden states. Another direction is to shift recurrent matrix multiplications outside the recurrence so as to improve the parallelization of RNNs. Bradbury et al. (2016) propose the quasi-recurrent network (QRNN). QRNN factors all matrix multiplications out of the recurrence and employs a convolutional network to capture local input patterns. A minimal recurrent pooling function is used in parallel across different channels to handle global input patterns. Lei et al. (2017) apply the kernel method to simplify recurrence and show improved model capacity with deep stacked RNNs. This idea is extended to SRU (Lei et al., 2018) where a minimal recurrent component is strengthened via an external highway layer. The proposed LRN falls into this category with the advantage over SRU of the non-dependence on the highway component. Orthogonal to the above work, recent studies also show the potential of accelerating matrix computation with low-level optimization. Diamos et al. 
(2016) emphasize persistent computational kernels to exploit GPU’s inverted memory hierarchy for reusing/caching purpose. Appleyard et al. (2016) upgrade NIVIDIA’s cuDNN implementation through exposing parallelism between operations within the recurrence. Kuchaiev and Ginsburg (2017) reduce the number of model parameters by factorizing or partitioning LSTM matrices. In general, all these techniques can be applied to any recurrent units to reduce computational overhead. Our work is closely related with ATR and SRU. Although recent work shows that novel recurrent units derived from weighted finite state automata are effective without the hidden-to-hidden connection (Balduzzi and Ghifary, 2016; Peng et al., 2018), we empirically observe that including previous hidden states for gates is crucial for model capacity which also resonates with the evolution of SRU. Unlike ATR and SRU, however, we demonstrate that the affine transformation on the previous hidden state for gates is unnecessary. In addition, our model has a strong connection with selfattention networks. 3 Lightweight Recurrent Network Given a sequence of input X = [x⊺ 1; x⊺ 2; . . . ; x⊺ n] ∈ Rn×d with length of n, LRN operates as follows2: Q, K, V = XWq, XWk, XWv (1) it = σ(kt + ht−1) (2) ft = σ(qt −ht−1) (3) ht = g(it ⊙vt + ft ⊙ht−1) (4) 2Bias terms are removed for clarity. 1540 where Wq, Wk, Wv ∈Rd×d are model parameters and g(·) is an activation function, such as identity and tanh. ⊙and σ(·) indicate the elementwise multiplication and sigmoid activation function, respectively. qt, kt and vt correspond to the t-th row of the projected sequence representation Q, K, V. We use the term q, k and v to denote the implicit correspondence to query, key and value in self-attention networks which is elaborated in the next section. As shown in Eq. (1), all matrix-related operations are shifted outside the recurrence and can be pre-calculated, thereby reducing the complexity of the recurrent computation from O(d2) to O(d) and easing model parallelization. The design of the input gate it and forget gate ft is inspired by the twin-gate mechanism in ATR (Zhang et al., 2018). Unlike ATR, however, we eschew the affine transformation on the previous hidden state. By doing so, the previous hidden state directly offers positive contribution to the input gate but negative to the forget gate, ensuring adverse correlation between these two gates. The current hidden state ht is a weighted average of the current input and the previous hidden state followed by an element-wise activation. When identity function is employed, our model shows analogous properties to ATR. However, we empirically observe that this leads to gradually increased hidden representation values, resulting in optimization instability. Unlike SRU, which controls stability through a particular designed scaling term, we replace the identity function with the tanh function, which is simple but effective. 4 Structure Decomposition In this section, we show an in-depth analysis of LRN by decomposing the recurrent structure. With an identity activation, the t-th hidden state can be expanded as follows: ht = t X k=1 ik ⊙ t−k Y l=1 fk+l ! ⊙vk, (5) where the representation of the current token is composed of all previous tokens with their contribution distinguished by both input and forget gates. Relation with self-attention network. After grouping these gates, we observe that: ht = t X k=1 ik |{z} key(K) ⊙fk+1 ⊙· · · ⊙ft | {z } query(Q) ⊙ vk |{z} value(V) . 
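A minimal NumPy sketch of the recurrence in Eqs. (1)–(4), together with a check of the expansion in Eq. (5) under an identity activation, is given below; the random parameters and sequence are purely illustrative and this is not the released implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lrn_forward(X, Wq, Wk, Wv, g=np.tanh):
    """LRN recurrence of Eqs. (1)-(4) over a sequence X of shape (n, d); biases omitted."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # Eq. (1): all matrix products pre-computed
    h = np.zeros(X.shape[1])
    hs, gates = [], []
    for t in range(X.shape[0]):
        i_t = sigmoid(K[t] + h)               # Eq. (2): input gate
        f_t = sigmoid(Q[t] - h)               # Eq. (3): forget gate
        h = g(i_t * V[t] + f_t * h)           # Eq. (4): only element-wise work per step
        hs.append(h)
        gates.append((i_t, f_t, V[t]))
    return np.stack(hs), gates

# with an identity activation, the last state matches the expansion in Eq. (5)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (0.1 * rng.normal(size=(8, 8)) for _ in range(3))
H, gates = lrn_forward(X, Wq, Wk, Wv, g=lambda x: x)
t = len(gates)
expanded = np.zeros(8)
for k in range(t):
    forget_chain = np.ones(8)
    for l in range(k + 1, t):
        forget_chain *= gates[l][1]           # the forget chain f_{k+1} ... f_t in Eq. (5)
    expanded += gates[k][0] * forget_chain * gates[k][2]
print(np.allclose(H[-1], expanded))           # True
```

Note that only element-wise operations remain inside the loop, which is what reduces the per-step cost from O(d²) to O(d) and allows Q, K, V to be computed for the whole sequence in advance.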
(6) Each weight can be regarded as a query from the current token ft to the k-th input token ik. This query chain can be decomposed into two parts: a key represented by ik and a query represented by Qt−k l=1 fk+l. The former is modulated through the weight matrix Wk, and tightly associated with the corresponding input token. Information carried by the key remains intact during the evolution of time step t. In contrast, the latter, induced by the weight matrix Wq, highly depends on the position and length of this chain, which dynamically changes between different token pairs. The weights generated by keys and queries are assigned to values represented by vk and manipulated by the weight matrix Wv. Compared with self-attention networks, LRN shows analogous weight parameters and model structure. The difference is that weights in self-attention networks are normalized across all input tokens. Instead, weights in LRN are unidirectional, unnomalized and spanned over all channels. Memory in LRN Alternatively, we can view the gating mechanism in LRN as a memory that gradually forgets information. Given the value representation at k-th time step vk, the information delivered to later time step t (k < t) in LRN is as follows: ik |{z} short term ⊙fk+1 ⊙· · · ⊙ft | {z } forget chain (long term) ⊙vk. (7) The input gate ik indicates the moment that LRN first accesses the input token xk, whose value reflects the amount of information or knowledge allowed from this token. A larger input gate corresponds to a stronger input signal, thereby a large change of activating short-term memory. This information is then delivered through a forget chain where memory is gradually decayed by a forget gate at each time step. The degree of memory decaying is dynamically controlled by the input sequence itself. When a new incoming token is more informative, the forget gate would increase so that previous knowledge is erased so as to make way for new knowledge in the memory. By contrast, meaningless tokens would be simply ignored. 1541 Model #Params Base +LN +BERT +LN+BERT ACC Time ACC Time ACC Time ACC Time Rockt¨aschel et al. (2016) 250K 83.50 This LSTM 8.36M 84.27 0.262 86.03 0.432 89.95 0.544 90.49 0.696 GRU 6.41M 85.71 0.245 86.05 0.419 90.29 0.529 90.10 0.695 ATR 2.87M 84.88 0.210 85.81 0.307 90.00 0.494 90.28 0.580 Work SRU 5.48M 84.28 0.258 85.32 0.283 89.98 0.543 90.09 0.555 LRN 4.25M 84.88 0.209 85.06 0.223 89.98 0.488 89.93 0.506 Table 1: Test accuracy (ACC) on SNLI task. “#Params”: the parameter number of Base. Base and LN denote the baseline model and layer normalization respectively. Time: time in seconds per training batch measured from 1k training steps on GeForce GTX 1080 Ti. Best results are highlighted in bold. 5 Gradient Analysis Gradient vanishing and explosion are the bottleneck that impedes training of vanilla RNNs (Pascanu et al., 2013). Consider a vanilla RNN formulated as follows: ht = g(Wxt + Uht−1). (8) The gradient back-propagated from the t-th step heavily depends on the following one-step derivation: ∂ht ∂ht−1 = UT g′. (9) Due to the chain rule, the recurrent weight matrix U will be repeatedly multiplied along the sequence length. Gradient vanishing/explosion results from a weight matrix with small/large norm (Pascanu et al., 2013). In LRN, however, the recurrent weight matrix is removed. The current hidden state is generated by directly weighting the current input and the previous hidden state. The one-step derivation of Eq. 
(2-4) is as follows: ∂ht ∂ht−1 = σ′ i ⊙vt −ht−1 ⊙σ′ f + ft  | {z } A ⊙g′ (10) where σ′ i and σ′ f denote the derivation of Eq. (2) and Eq. (3), respectively. The difference between Eq. (9) and Eq. (10) is that the recurrent weight matrix is substituted by a more expressive component denoted as A in Eq. (10). Unlike the weight matrix U, the norm of A is input-dependent and varies dynamically along different positions. The dependence on inputs provides LRN with the capability of avoiding gradient vanishing/explosion. 6 Experiments We verify the effectiveness of LRN on six diverse NLP tasks. For each task, we adopt (near) state-of-the-art neural models with RNNs handling sequence representation. We compare LRN with several cutting-edge recurrent units, including LSTM, GRU, ATR and SRU. For all comparisons, we keep the neural architecture intact and only alter the recurrent unit.3 All RNNs are implemented without specialized cuDNN kernels. Unless otherwise stated, different models on the same task share the same set of hyperparameters. 6.1 Natural Language Inference Settings Natural language inference reasons about the entailment relationship between a premise sentence and a hypothesis sentence. We use the Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015) and treat the task as a three-way classification task. This dataset contains 549,367 premise-hypothesis pairs for training, 9,842 pairs for developing and 9,824 pairs for testing. We employ accuracy for evaluation. We implement a variant of the word-by-word attention model (Rockt¨aschel et al., 2016) using Tensorflow for this task, where we stack two additional bidirectional RNNs upon the final sequence representation and incorporate character embedding for word-level representation. The pretrained GloVe (Pennington et al., 2014) word vectors are used to initialize word embedding. We also integrate the base BERT (Devlin et al., 2018) to improve contextual modeling. 3Due to possible dimension mismatch, we include an additional affine transformation on the input matrix for the highway component in SRU. In addition, we only report and compare speed statistics when all RNNs are optimally implemented where computations that can be done before the recurrence are moved outside. 1542 Model #Params AmaPolar Yahoo AmaFull YelpPolar ERR Time ERR Time ERR Time ERR Time Zhang et al. (2015) 6.10 29.16 40.57 5.26 This LSTM 227K 4.37 0.947 24.62 1.332 37.22 1.003 3.58 1.362 GRU 176K 4.39 0.948 24.68 1.242 37.20 0.982 3.47 1.230 ATR 74K 4.78 0.867 25.33 1.117 38.54 0.836 4.00 1.124 Work SRU 194K 4.95 0.919 24.78 1.394 38.23 0.907 3.99 1.310 LRN 151K 4.98 0.731 25.07 1.038 38.42 0.788 3.98 1.022 Table 2: Test error (ERR) on document classification task. “#Params”: the parameter number in AmaPolar task. Time: time in seconds per training batch measured from 1k training steps on GeForce GTX 1080 Ti. We set the character embedding size and the RNN hidden size to 64 and 300 respectively. Dropout is applied between consecutive layers with a rate of 0.3. We train models within 10 epochs using the Adam optimizer (Kingma and Ba, 2014) with a batch size of 128 and gradient norm limit of 5.0. We set the learning rate to 1e−3, and apply an exponential moving average to all model parameters with a decay rate of 0.9999. These hyperparameters are tuned according to development performance. Results Table 1 shows the test accuracy and training time of different models. Our implementation outperforms the original model where Rockt¨aschel et al. 
(2016) report an accuracy of 83.50. Overall results show that LRN achieves competitive performance but consumes the least training time. Although LSTM and GRU outperform LRN by 0.3∼0.9 in terms of accuracy, these recurrent units sacrifice running efficiency (about 7%∼48%) depending on whether LN and BERT are applied. No significant performance difference is observed between SRU and LRN, but LRN has fewer model parameters and shows a speedup over SRU of 8%∼21%. Models with layer normalization (LN) (Ba et al., 2016) tend to be more stable and effective. However, for LSTM, GRU and ATR, LN results in significant computational overhead (about 27%∼71%). In contrast, quasi recurrent models like SRU and LRN only suffer a marginal speed decrease. This is reasonable because layer normalization is moved together with matrix multiplication out of the recurrence. Results with BERT show that contextual information is valuable for performance improvement. LRN obtains additional 4 percentage points gain with BERT and reaches an accuracy of around 89.9. This shows the compatibility of LRN with existing pretrained models. In addition, although the introduction of BERT brings in heavy matrix computation, the benefits from LRN do not disappear. LRN is still the fastest model, outperforming other recurrent units by 8%∼27%. 6.2 Document Classification Settings Document classification poses challenges in the form of long-range dependencies where information from distant tokens that contribute to the correct category should be captured. We use Amazon Review Polarity (AmaPolar, 2 labels, 3.6M/0.4M for training/testing), Amazon Review Full (AmaFull, 5 labels, 3M/0.65M for training/testing), Yahoo! Answers (Yahoo, 10 labels, 1.4M/60K for training/testing) and Yelp Review Polarity (YelpPolar, 2 labels, 0.56M/38K for training/testing) from Zhang et al. (2015) for experiments. We randomly select 10% of training data for validation. Models are evaluated by test error. We treat a document as a sequence of words. Our model is a bidirectional RNN followed by an attentive pooling layer. The word-level representation is composed of a pretrained GloVe word vector and a convolutional character vector. We use Tensorflow for implementation and do not use layer normalization. We set character embedding size to 32, RNN hidden size to 64 and dropout rate to 0.1. Model parameters are tuned by Adam optimizer with initial learning rate of 1e−3. Gradients are clipped when their norm exceeds 5. We limit the maximum document length to 400 and maximum training epochs to 6. Parameters are smoothed by an exponential moving average with a decay rate of 0.9999. These hyperparameters are tuned according to development performance. Results Table 2 summarizes the classification results. LRN achieves comparable classification performance against ATR and SRU, but slightly 1543 Model #Params BLEU Train Decode GNMT 24.61 GRU 206M 26.28 2.67 45.35 ATR 122M 25.70 1.33 34.40 SRU 170M 25.91 1.34 42.84 LRN 143M 26.26 0.99 36.50 oLRN 164M 26.73 1.15 40.19 Table 3: Case-insensitive tokenized BLEU score on WMT14 English-German translation task. Train: time in seconds per training batch measured from 0.2k training steps on Tesla P100. Decode: time in milliseconds used to decode one sentence measured on newstest2014 dataset. underperforms LSTM and GRU (-0.45∼-1.22). This indicates that LRN is capable of handling long-range dependencies though not as strong as complex recurrent units. 
Instead, the simplification endows LRN with less computational overhead than these units. Particularly, LRN accelerates the training over LSTM and SRU by about 20%, or several days of training time on GeForce GTX 1080 Ti.4 6.3 Machine Translation Settings Machine translation is the task of transforming meaning from a source language to a target language. We experiment with the WMT14 English-German translation task (Bojar et al., 2014) which consists of 4.5M training sentence pairs.5 We use newstest2013 as our development set and newstest2014 as our test set. Casesensitive tokenized BLEU score is used for evaluation. We implement a variant of the GNMT system (Wu et al., 2016) using Tensorflow, enhanced with residual connections, layer normalization, label smoothing, a context-aware component (Zhang et al., 2017) and multi-head attention (Vaswani et al., 2017). Byte-pair encoding (Sennrich et al., 2016) is used to reduce the vocabulary size to 32K. We set the hidden size and embedding size to 1024. Models are trained using Adam optimizer with adaptive learning rate sched4We notice that ATR operates faster than SRU. This is because though in theory SRU can be highly optimized for parallelization, computational framework like Tensorflow can not handle it automatically and the smaller amount of calculation in ATR has more advantage in practice. 5Preprocessed data is available at (Zhang et al., 2018): https://drive.google.com/open?id= 15WRLfle66CO1zIGKbyz0FsFmUcINyb4X. Model #Params Base +Elmo rnet* 71.1/79.5 -/LSTM 2.67M 70.46/78.98 75.17/82.79 GRU 2.31M 70.41/79.15 75.81/83.12 ATR 1.59M 69.73/78.70 75.06/82.76 SRU 2.44M 69.27/78.41 74.56/82.50 LRN 2.14M 70.11/78.83 76.14/83.83 Table 4: Exact match/F1-score on SQuad dataset. “#Params”: the parameter number of Base. rnet*: results published by Wang et al. (2017). ule (Chen et al., 2018). We cut gradient norm to 1.0 and set the token size to 32K. Label smoothing rate is set to 0.1. Model Variant Apart from LRN, we develop an improved variant for machine translation that includes an additional output gate. Formally, we change the Eq. (4) to the following one: ct = it ⊙vt + ft ⊙ht−1 (11) ot = σ(Woxt −ct) (12) ht = ot ⊙ct (13) We denote this variant oLRN. Like LRN, the added matrix transformation in oLRN can be shifted out of the recurrence, bringing in little computational overhead. The design of this output gate ot is inspired by the LSTM structure, which acts as a controller to adjust information flow. In addition, this gate helps stabilize the hidden activation to avoid value explosion, and also improves model fitting capacity. Results The results in Table 3 show that translation quality of LRN is slightly worse than that of GRU (-0.02 BLEU). After incorporating the output gate, however, oLRN yields the best BLEU score of 26.73, outperforming GRU (+0.45 BLEU). In addition, the training time results in Table 3 confirm the computational advantage of LRN over all other recurrent units, where LRN speeds up over ATR and SRU by approximately 25%. For decoding, nevertheless, the autoregressive schema of GNMT disables position-wise parallelization. In this case, the recurrent unit with the least computation operations, i.e. ATR, becomes the fastest. Still, both LRN and oLRN translate sentences faster than SRU (+15%/+6%). 6.4 Reading Comprehension Settings Reading comprehension aims at providing correct answers to a query based on a 1544 Model #Params PTB WT2 Base +Finetune +Dynamic Base +Finetune +Dynamic Yang et al. 
(2018) 22M 55.97 54.44 47.69 63.33 61.45 40.68 This LSTM 22M 63.78 62.12 53.11 69.78 68.68 44.60 GRU 17M 69.09 67.61 60.21 73.37 73.05 49.77 ATR 9M 66.24 65.86 58.29 75.36 73.35 48.65 Work SRU 13M 69.64 65.29 60.97 85.15 84.97 57.97 LRN 11M 61.26 61.00 54.45 69.91 68.86 46.97 Table 5: Test perplexity on PTB and WT2 language modeling task. “#Params”: the parameter number in PTB task. Finetune: fintuning the model after convergence. Dynamic dynamic evaluation. Lower perplexity indicates better performance. Model #Params NER LSTM* 90.94 LSTM 245K 89.61 GRU 192K 89.35 ATR 87K 88.46 SRU 161K 88.89 LRN 129K 88.56 Table 6: F1 score on CoNLL-2003 English NER task. “#Params”: the parameter number in NER task. LSTM* denotes the reported result (Lample et al., 2016). given document, which involves complex sentence matching, reasoning and knowledge association. We use the SQuAD corpus (Rajpurkar et al., 2016) for this task and adopt span-based extraction method. This corpus contains over 100K document-question-answer triples. We report exact match (EM) and F1-score (F1) on the development set for evaluation. We employ the public available rnet model (Wang et al., 2017)6 in Tensorflow. We use the default model settings: character embedding size 8, hidden size 75, batch size 64, and Adadelta optimizer (Zeiler, 2012) with initial learning rate of 0.5. Gradient norm is cut to 5.0. We also experiment with Elmo (Peters et al., 2018), and feed the Elmo representation in before the encoding layer and after the matching layer with a dropout of 0.5. Results Table 4 lists the EM/F1 score of different models. In this task, LRN outperforms ATR and SRU in terms of both EM and F1 score. After integrating Elmo for contextual modeling, the performance of LRN reaches the best (76.14 6https://github.com/HKUST-KnowComp/ R-Net EM and 83.83 F1), beating both GRU and LSTM (+0.33EM, +0.71F1). As recent studies show that cases in SQuAD are dominated by local pattern matching (Jia and Liang, 2017), we argue that LRN is good at handling local dependencies. 6.5 Named Entity Recognition Settings Named entity recognition (NER) classifies named entity mentions into predefined categories. We use the CoNLL-2003 English NER dataset (Tjong Kim Sang and De Meulder, 2003) and treat NER as a sequence labeling task. We use the standard train, dev and test split. F1 score is used for evaluation. We adopt the bidirectional RNN with CRF inference architecture (Lample et al., 2016). We implement different models based on the public codebase in Tensorflow.7 We use the default hyperparameter settings. Word embedding is initialized by GloVe vectors. Results As shown in Table 68, the performance of LRN matches that of ATR and SRU, though LSTM and GRU operate better (+1.05 and +0.79). As in the SQuAD task, the goal of NER is to detect local entity patterns and figure out the entity boundaries. However, the performance gap between LSTM/GRU and LRN in NER is significantly larger than that in SQuAD. We ascribe this to the weak model architecture and the small scale NER dataset where entity patterns are not fully captured by LRN. 6.6 Language Modeling Settings Language modeling aims to estimate the probability of a given sequence, which re7https://github.com/Hironsan/anago 8Notice that our implementation falls behind the original model (Lample et al., 2016) because we do not use specifically trained word embedding. 1545 Figure 1: The decay curve of each token modulated by input and forget gates along the token position. 
Notice how the memory of term “great” flows to the final state shown in red, and contributes to a Positive decision. Weight denotes the averaged activation of ik ⊙ Qt−k l=1 fk+l  as shown in Eq. (5). quires models to memorize long-term structure of language. We use two widely used datasets, Penn Treebank (PTB) (Mikolov et al., 2010) and WikiText-2 (WT2) (Merity et al., 2016) for this task. Models are evaluated by perplexity. We modify the mixture of softmax model (MoS) (Yang et al., 2018)9 in PyTorch to include different recurrent units. We apply weight dropout to all recurrent-related parameters instead of only hidden-to-hidden connection. We follow the experimental settings of MoS, and manually tune the initial learning rate based on whether training diverges. Results Table 5 shows the test perplexity of different models.10 In this task, LRN significantly outperforms GRU, ATR and SRU, and achieves near the same perplexity as LSTM. This shows that in spite of its simplicity, LRN can memorize longterm language structures and capture a certain degree of language variation. In summary, LRN generalizes well to different tasks and can be used as a drop-in replacement of existing recurrent units. 6.7 Ablation Study Part of LRN can be replaced with some alternatives. In this section, we conduct ablation analysis to examine two possible designs: gLRN The twin-style gates in Eq. (2-3) can be re9https://github.com/zihangdai/mos 10Our re-implementation of LSTM model is worse than the original model (Yang et al., 2018) because the system is sensitive to hyperparameters, and we apply weight dropout to all LSTM parameters which makes the original best choices not optimal. Model SNLI PTB LRN 85.06 61.26 gLRN 84.72 92.49 eLRN 83.56 169.81 Table 7: Test accuracy on SNLI task with Base+LN setting and test perplexity on PTB task with Base setting. placed with a general one: ft = σ(qt −ht−1), it = 1 −ft. (14) In this way, input and forget gate are inferable from each other with the key weight parameter removed. eLRN The above design can be further simplified into an extreme case where the forget gate is only generated from the previous hidden state without the query vector: ft = σ(−ht−1), it = 1 −ft. (15) We experiment with SNLI and PTB tasks. Results in Table 7 show that although the accuracy on SNLI is acceptable, gLRN and eLRN perform significantly worse on the PTB task. This suggests that these alternative structures suffer from weak generalization. 6.8 Structure Analysis In this section, we provide a visualization to check how the gates work in LRN. We experiment with a unidirectional LRN on the AmaPolar dataset, where the last hidden state 1546 is used for document classification. Figure 1 shows the decay curve of each token along the token position. The memory curve of each token decays over time. However, important clues that contribute significantly to the final decision, as the token “great” does, decrease slowly, as shown by the red curve. Different tokens show different decay rate, suggesting that input and forget gate are capable of learning to propagate relevant signals. All these demonstrate the effectiveness of our LRN model. 7 Conclusion and Future Work This paper presents LRN, a lightweight recurrent network that factors matrix operations outside the recurrence and enables higher parallelization. Theoretical and empirical analysis shows that the input and forget gate in LRN can learn long-range dependencies and avoid gradient vanishing and explosion. 
LRN has a strong correlation with selfattention networks. Experiments on six different NLP tasks show that LRN achieves competitive performance against existing recurrent units. It is simple, effective and reaches better trade-off among parameter number, running speed, model performance and generalization. In the future, we are interested in testing lowlevel optimizations of LRN, which are orthogonal to this work, such as dedicated cuDNN kernels. Acknowledgments We thank the reviewers for their insightful comments. Biao Zhang acknowledges the support of the Baidu Scholarship. This work has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service (http://www.hpc.cam.ac.uk) funded by EPSRC Tier-2 capital grant EP/P020259/1. References Jeremy Appleyard, Tomas Kocisky, and Phil Blunsom. 2016. Optimizing performance of recurrent neural networks on gpus. arXiv preprint arXiv:1604.01946. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. David Balduzzi and Muhammad Ghifary. 2016. Strongly-typed recurrent neural networks. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML’16, pages 1292–1300. JMLR.org. Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166. Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleˇs Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12–58, Baltimore, Maryland, USA. Association for Computational Linguistics. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2016. Quasi-recurrent neural networks. arXiv preprint arXiv:1611.01576. Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The best of both worlds: Combining recent advances in neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 76–86. Association for Computational Linguistics. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. 
Greg Diamos, Shubho Sengupta, Bryan Catanzaro, Mike Chrzanowski, Adam Coates, Erich Elsen, Jesse Engel, Awni Hannun, and Sanjeev Satheesh. 2016. Persistent rnns: Stashing recurrent weights on-chip. In International Conference on Machine Learning, pages 2024–2033. Jeffrey L Elman. 1990. Finding structure in time. Cognitive science, 14(2):179–211. 1547 Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021–2031. Association for Computational Linguistics. Seonhoon Kim, Jin-Hyuk Hong, Inho Kang, and Nojun Kwak. 2019. Semantic sentence matching with densely-connected recurrent and co-attentive information. AAAI19. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Oleksii Kuchaiev and Boris Ginsburg. 2017. Factorization tricks for lstm networks. arXiv preprint arXiv:1703.10722. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. Association for Computational Linguistics. Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2017. Recurrent additive networks. arXiv preprint arXiv:1705.07393. T. Lei, W. Jin, R. Barzilay, and T. Jaakkola. 2017. Deriving neural architectures from sequence and graph kernels. In International Conference on Machine Learning (ICML). Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, and Yoav Artzi. 2018. Simple recurrent units for highly parallelizable recurrence. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4470–4481. Association for Computational Linguistics. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh annual conference of the international speech communication association. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In International conference on machine learning, pages 1310–1318. Hao Peng, Roy Schwartz, Sam Thomson, and Noah A. Smith. 2018. Rational recurrences. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1203–1214. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Association for Computational Linguistics. Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom. 2016. 
Reasoning about entailment with neural attention. In International Conference on Learning Representations (ICLR). Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725. Association for Computational Linguistics. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, CONLL ’03, pages 142–147, Stroudsburg, PA, USA. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 189–198. Association for Computational Linguistics. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus 1548 Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, ukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144. Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W Cohen. 2018. Breaking the softmax bottleneck: A high-rank rnn language model. ICLR. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. Biao Zhang, Deyi Xiong, and Jinsong Su. 2018. Accelerating neural transformer via an average attention network. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1789–1798, Melbourne, Australia. Association for Computational Linguistics. Biao Zhang, Deyi Xiong, and Jinsong Su. 2018. Neural machine translation with deep attention. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–1. Biao Zhang, Deyi Xiong, Jinsong Su, and Hong Duan. 2017. A context-aware recurrent encoder for neural machine translation. IEEE/ACM Trans. Audio, Speech and Lang. Proc., 25(12):2424–2432. Biao Zhang, Deyi Xiong, jinsong su, Qian Lin, and Huiji Zhang. 2018. Simplifying neural machine translation with addition-subtraction twin-gated recurrent networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4273–4283. Association for Computational Linguistics. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS’15, pages 649–657, Cambridge, MA, USA. MIT Press.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 151–164 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 151 Massively Multilingual Transfer for NER Afshin Rahimi∗ Yuan Li∗ Trevor Cohn School of Computing and Information Systems The University of Melbourne [email protected] {rahimia,t.cohn}@unimelb.edu.au Abstract In cross-lingual transfer, NLP models over one or more source languages are applied to a lowresource target language. While most prior work has used a single source model or a few carefully selected models, here we consider a “massive” setting with many such models. This setting raises the problem of poor transfer, particularly from distant languages. We propose two techniques for modulating the transfer, suitable for zero-shot or few-shot learning, respectively. Evaluating on named entity recognition, we show that our techniques are much more effective than strong baselines, including standard ensembling, and our unsupervised method rivals oracle selection of the single best individual model.1 1 Introduction Supervised learning remains king in natural language processing, with most tasks requiring large quantities of annotated corpora. The majority of the world’s 6,000+ languages however have limited or no annotated text, and therefore much of the progress in NLP has yet to be realised widely. Cross-lingual transfer learning is a technique which can compensate for the dearth of data, by transferring knowledge from high- to lowresource languages, which has typically taken the form of annotation projection over parallel corpora or other multilingual resources (Yarowsky et al., 2001; Hwa et al., 2005), or making use of transferable representations, such as phonetic transcriptions (Bharadwaj et al., 2016), closely related languages (Cotterell and Duh, 2017) or bilingual dictionaries (Mayhew et al., 2017; Xie et al., 2018). Most methods proposed for cross-lingual transfer rely on a single source language, which limits the transferable knowledge to only one source. ∗Both authors contributed equally to this work. 1The code and the datasets will be made available at https://github.com/afshinrahimi/mmner. The target language might be similar to many source languages, on the grounds of the script, word order, loan words etc, and transfer would benefit from these diverse sources of information. There are a few exceptions, which use transfer from several languages, ranging from multitask learning (Duong et al., 2015; Ammar et al., 2016; Fang and Cohn, 2017), and annotation projection from several languages (T¨ackstr¨om, 2012; Fang and Cohn, 2016; Plank and Agi´c, 2018). However, to the best of our knowledge, none of these approaches adequately account for the quality of transfer, but rather “weight” the contribution of each language uniformly. In this paper, we propose a novel method for zero-shot multilingual transfer, inspired by research in truth inference in crowd-sourcing, a related problem, in which the ‘ground truth’ must be inferred from the outputs of several unreliable annotators (Dawid and Skene, 1979). In this problem, the best approaches estimate each model’s reliability, and their patterns of mistakes (Kim and Ghahramani, 2012). Our proposed model adapts these ideas to a multilingual transfer setting, whereby we learn the quality of transfer, and language-specific transfer errors, in order to infer the best labelling in the target language, as part of a Bayesian graphical model. 
The key insight is that while the majority of poor models make lots of mistakes, these mistakes are diverse, while the few good models consistently provide reliable input. This allows the model to infer which are the reliable models in an unsupervised manner, i.e., without explicit supervision in the target language, and thereby make accurate inferences despite the substantial noise. In the paper, we also consider a supervised setting, where a tiny annotated corpus is available in the target language. We present two methods to use this data: 1) estimate reliability parameters of 152 the Bayesian model, and 2) explicit model selection and fine-tuning of a low-resource supervised model, thus allowing for more accurate modelling of language specific parameters, such as character embeddings, shown to be important in previous work (Xie et al., 2018). Experimenting on two NER corpora, one with as many as 41 languages, we show that single model transfer has highly variable performance, and uniform ensembling often substantially underperforms the single best model. In contrast, our zero-shot approach does much better, exceeding the performance of the single best model, and our few-shot supervised models result in further gains. 2 Approach We frame the problem of multilingual transfer as follows. We assume a collection of H models, all trained in a high resource setting, denoted Mh = {Mh i , i ∈(1, H)}. Each of these models are not well matched to our target data setting, for instance these may be trained on data from different domains, or on different languages, as we evaluate in our experiments, where we use crosslingual embeddings for model transfer. This is a problem of transfer learning, namely, how best we can use the H models for best results in the target language.2 Simple approaches in this setting include a) choosing a single model M ∈ Mh, on the grounds of practicality, or the similarity between the model’s native data condition and the target, and this model is used to label the target data; or b) allowing all models to ‘vote’ in an classifier ensemble, such that the most frequent outcome is selected as the ensemble output. Unfortunately neither of these approaches are very accurate in a cross-lingual transfer setting, as we show in §4, where we show a fixed source language model (en) dramatically underperforms compared to oracle selection of source language, and the same is true for uniform voting. Motivated by these findings, we propose novel methods for learning. For the “zero-shot” setting where no labelled data is available in the target, we propose the BEAuns method inspired by work 2We limit our attention to transfer in a ‘black-box’ setting, that is, given predictive models, but not assuming access to their data, nor their implementation. This is the most flexible scenario, as it allows for application to settings with closed APIs, and private datasets. It does, however, preclude multitask learning, as the source models are assumed to be static. V (j) π zi yij β α i = 1 . . . N j = 1 . . . H Figure 1: Plate diagram for the BEA model. in truth inference from crowd-sourced datasets or diverse classifiers (§2.1). To handle the “few-shot” case §2.2 presents a rival supervised technique, RaRe, based on using very limited annotations in the target language for model selection and classifier fine-tuning. 
2.1 Zero-Shot Transfer One way to improve the performance of the ensemble system is to select a subset of component models carefully, or more generally, learn a non-uniform weighting function. Some models do much better than others, on their own, so it stands to reason that identifying these handful of models will give rise to better ensemble performance. How might we proceed to learn the relative quality of models in the setting where no annotations are available in the target language? This is a classic unsupervised inference problem, for which we propose a probabilistic graphical model, inspired by Kim and Ghahramani (2012). We develop a generative model, illustrated in Figure 1, of the transfer models’ predictions, yij, where i ∈[1, N] is an instance (a token or an entity span), and j ∈[1, H] indexes a transfer model. The generative process assumes a ‘true’ label, zi ∈ [1, K], which is corrupted by each transfer model, in producing the prediction, yij. The corruption process is described by P(yij = l|zi = k, V (j)) = V (j) kl , where V (j) ∈ RK×K is the confusion matrix specific to a transfer model. To complete the story, the confusion matrices are drawn from vague row-wise independent Dirichlet priors, with a parameter α = 1, and the true labels are governed by a Dirichlet prior, π, which is drawn from an uninformative Dirichlet distribution with a parameter β = 1. This generative model is referred to as BEA. Inference under the BEA model involves ex153 plaining the observed predictions Y in the most efficient way. Where several transfer models have identical predictions, k, on an instance, this can be explained by letting zi = k,3 and the confusion matrices of those transfer models assigning high probability to V (j) kk . Other, less reliable, transfer models will have divergent predictions, which are less likely to be in agreement, or else are heavily biased towards a particular class. Accordingly, the BEA model can better explain these predictions through label confusion, using the off-diagonal elements of the confusion matrix. Aggregated over a corpus of instances, the BEA model can learn to differentiate between those reliable transfer models, with high V (j) kk and those less reliable ones, with high V (j) kl , l ̸= k. This procedure applies perlabel, and thus the ‘reliability’ of a transfer model is with respect to a specific label, and may differ between classes. This helps in the NER setting where many poor transfer models have excellent accuracy for the outside label, but considerably worse performance for entity labels. For inference, we use mean-field variational Bayes (Jordan, 1998), which learns a variational distribution, q(Z, V, π) to optimise the evidence lower bound (ELBO), log P(Y |α, β) ≥Eq(Z,V,π) log P(Y, Z, V, π|α, β) q(Z, V, π) assuming a fully factorised variational distribution, q(Z, V, π) = q(Z)q(V )q(π). This gives rise to an iterative learning algorithm with update rules: Eq log πk (1a) =ψ β + X i q(zi = k) ! −ψ (Kβ + N) Eq log V (j) kl (1b) =ψ α + X i q(zi = k)1[yij = l] ! −ψ Kα + X i q(zi = k) ! q(zi = k) ∝exp   Eq log πk + X j Eq log V (j) kyij    (2) 3Although there is no explicit breaking of the symmetry of the model, we initialise inference using the majority vote, which results in a bias towards this solution. w1 w2 w3 w4 [1, 4] [2, 4] [3, 4] M h 1 B-ORG I-ORG I-ORG I-ORG ORG O O M h 2 O B-ORG I-ORG I-ORG O ORG O M h 3 O O B-ORG I-ORG O O ORG M h 4 O B-PER I-PER I-PER O PER O M h 5 O B-PER I-PER I-PER O PER O Agg. 
O B-PER I-ORG I-ORG O PER O Table 1: An example sentence with its aggregated labels in both token view and entity view. Aggregation in token view may generate results inconsistent with the BIO scheme. where ψ is the digamma function, defined as the logarithmic derivative of the gamma function. The sets of rules (1) and (2) are applied alternately, to update the values of Eq log πk, Eq log V (j) kl , and q(zij = k) respectively. This repeats until convergence, when the difference in the ELBO between two iterations is smaller than a threshold. The final prediction of the model is based on q(Z), using the maximum a posteriori label ˆzi = arg maxz q(zi = z). This method is referred to as BEAuns. In our NER transfer task, classifiers are diverse in their F1 scores ranging from almost 0 to around 80, motivating spammer removal (Raykar and Yu, 2012) to filter out the worst of the transfer models. We adopt a simple strategy that first estimates the confusion matrices for all transfer models on all labels, then ranks them based on their mean recall on different entity categories (elements on the diagonals of their confusion matrices), and then runs the BEA model again using only labels from the top k transfer models only. We call this method BEAuns×2 and its results are reported in §4. 2.1.1 Token versus Entity Granularity Our proposed aggregation method in §2.1 is based on an assumption that the true annotations are independent from each other, which simplifies the model but may generate undesired results. That is, entities predicted by different transfer models could be mixed, resulting in labels inconsistent with the BIO scheme. Table 1 shows an example, where a sentence with 4 words is annotated by 5 transfer models with 4 different predictions, among which at most one is correct as they overlap. However, the aggregated result in the token view is a mixture of two predictions, which is supported by no transfer models. To deal with this problem, we consider aggre154 gating the predictions in the entity view. As shown in Table 1, we convert the predictions for tokens to predictions for ranges, aggregate labels for every range, and then resolve remaining conflicts. A prediction is ignored if it conflicts with another one with higher probability. By using this greedy strategy, we can solve the conflicts raised in entitylevel aggregation. We use superscripts tok and ent to denote token-level and entity-level aggregations, i.e. BEAtok uns and BEAent uns. 2.2 Few-Shot Transfer Until now, we have assumed no access to annotations in the target language. However, when some labelled text is available, how might this best be used? In our experimental setting, we assume a modest set of 100 labelled sentences, in keeping with a low-resource setting (Garrette and Baldridge, 2013).4 We propose two models BEAsup and RaRe in this setting. Supervising BEA (BEAsup) One possibility is to use the labelled data to find the posterior for the parameters V (j) and π of the Bayesian model described in §2.1. Let nk be the number of instances in the labelled data whose true label is k, and njkl the number of instances whose true label is k and classifier j labels them as l. Then the quantities in Equation (1) can be calculated as E log πk =ψ(nk) −ψ(N) E log vjkl =ψ(njkl) −ψ X l njkl ! . These are used in Equation (2) for inference on the test set. We refer to this setting as BEAsup. 
Ranking and Retraining (RaRe) We also propose an alternative way of exploiting the limited annotations, RaRe, which first ranks the systems, and then uses the top ranked models’ outputs alongside the gold data to retrain a model on the target language. The motivation is that the above technique is agnostic to the input text, and therefore is unable to exploit situations where regularities occur, such as common words or character patterns that are indicative of specific class labels, including names, titles, etc. These signals are unlikely to be consistently captured by crosslingual transfer. Training a model on the target 4Garrette and Baldridge (2013) showed that about 100 sentences can be annotated with POS tags in two hours by non-native annotators. language with a character encoder component, can distil the signal that are captured by the transfer models, while relating this towards generalisable lexical and structural evidence in the target language. This on its own will not be enough, as many tokens will be consistently misclassified by most or all of the transfer models, and for this reason we also perform model fine-tuning using the supervised data. The ranking step in RaRe proceeds by evaluating each of the H transfer models on the target gold set, to produce scores sh (using the F1 score). The scores are then truncated to the top k ≤H values, such that sh = 0 for those systems h not ranked in the top k, and normalised ωh = sh Pk j=1 sj . The range of scores are quite wide, covering 0.00 −0.81 (see Figure 2), and accordingly this simple normalisation conveys a strong bias towards the top scoring transfer systems. The next step is a distillation step, where a model is trained on a large unannotated dataset in the target language, such that the model predictions match those of a weighted mixture of transfer models, using ⃗ω = (ω1, . . . , ωH) as the mixing weights. This process is implemented as minibatch scheduling, where the labels for each minibatch are randomly sampled from transfer model h with probability ωh.5 This is repeated over the course of several epochs of training. Finally, the model is fine-tuned using the small supervised dataset, in order to correct for phenomena that are not captured from model transfer, particularly character level information which is not likely to transfer well for all but the most closely related languages. Fine-tuning proceeds for a fixed number of epochs on the supervised dataset, to limit overtraining of richly parameterise models on a tiny dataset. Note that in all stages, the same supervised dataset is used, both in ranking and fine-tuning, and moreover, we do not use a development set. This is not ideal, and generalisation performance would likely improve were we to use additional annotated data, however our meagre use of data is designed for a low resource setting where labelled data is at a premium. 5We show that uniform sampling with few source languages achieves worse performance. 155 3 Experiments 3.1 Data Our primarily evaluation is over a subset of the Wikiann NER corpus (Pan et al., 2017), using 41 out of 282 languages, where the langauges were chosen based on their overlap with multilingual word embedding resources from Lample et al. (2018).6 The NER taggs are in IOB2 format comprising of LOC, PER, and ORG. The distribution of labels is highly skewed, so we created balanced datasets, and partitioned into training, development, and test sets, details of which are in the Appendix. 
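A minimal sketch of the zero-shot aggregation defined by Eqs. (1)–(2) is given below; it assumes token-level inputs, majority-vote initialisation (as noted in footnote 3), and a fixed number of iterations in place of the ELBO-based stopping criterion, and is illustrative rather than the exact implementation:

```python
import numpy as np
from scipy.special import digamma

def bea_aggregate(Y, K, alpha=1.0, beta=1.0, n_iter=50):
    """Unsupervised BEA aggregation over token-level predictions.

    Y: (N, H) array of predicted label ids in [0, K) from H transfer models.
    Returns q: (N, K) approximate posterior over the true labels z.
    """
    N, H = Y.shape
    # initialise q(z) from the majority vote to break symmetry
    q = np.stack([np.bincount(Y[i], minlength=K) for i in range(N)]).astype(float)
    q /= q.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # Eq. (1a): expected log class prior
        Elog_pi = digamma(beta + q.sum(axis=0)) - digamma(K * beta + N)
        # Eq. (1b): expected log confusion matrix, one (K, K) matrix per transfer model
        Elog_V = np.zeros((H, K, K))
        for j in range(H):
            counts = np.zeros((K, K))
            for l in range(K):
                counts[:, l] = q[Y[:, j] == l].sum(axis=0)   # sum_i q(z_i=k) 1[y_ij=l]
            Elog_V[j] = digamma(alpha + counts) - digamma(K * alpha + q.sum(axis=0))[:, None]
        # Eq. (2): q(z_i=k) proportional to exp(E log pi_k + sum_j E log V^(j)_{k, y_ij})
        log_q = np.tile(Elog_pi, (N, 1))
        for j in range(H):
            log_q += Elog_V[j][:, Y[:, j]].T
        q = np.exp(log_q - log_q.max(axis=1, keepdims=True))
        q /= q.sum(axis=1, keepdims=True)
    return q
```

The final prediction takes the maximum a posteriori label per instance, and the supervised variant (BEAsup) simply replaces the soft counts inside the loop with the hard counts n_k and n_jkl computed from the small labelled set before running the update for q(z).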
For comparison with prior work, we also evaluate on the CoNLL 2002 and 2003 datasets (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003), which we discuss further in §4. For language-independent word embedding features we use fastText 300 dimensional Wikipedia embeddings (Bojanowski et al., 2017), and map them to the English embedding space using character-identical words as the seed for the Procrustes rotation method for learning bingual embedding spaces from MUSE (Lample et al., 2018).7 Similar to Xie et al. (2018) we don’t rely on a bilingual dictionary, so the method can be easily applied to other languages. 3.2 Model Variations As the sequential tagger, we use a BiLSTM-CRF (Lample et al., 2016), which has been shown to result in state-of-the-art results in high resource settings (Ma and Hovy, 2016; Lample et al., 2016). This model includes both word embeddings (for which we used fixed cross-lingual embeddings) and character embeddings, to form a parameterised potential function in a linear chain conditional random field. With the exception of batch size and learning rate which were tuned (details in Appendix), we kept the architecture and the hyperparameters the same as the published code.8 6With ISO 639-1 codes: af, ar, bg, bn, bs, ca, cs, da, de, el, en, es, et, fa, fi, fr, he, hi, hr, hu, id, it, lt, lv, mk, ms, nl, no, pl, pt, ro, ru, sk, sl, sq, sv, ta, tl, tr, uk and vi. 7 We also experimented with other bilingual embedding methods, including: supervised learning over bilingual dictionaries, which barely affected system performance; and pure-unsupervised methods (Lample et al., 2018; Artetxe et al., 2018), which performed substantially worse. For this reason we use identical word type seeding, which is preferred as it imposes no additional supervision requirement. 8https://github.com/guillaumegenthial/ sequence_tagging We trained models on all 41 languages in both high-resource (HSup) and naive supervised lowresource (LSup) settings, where HSup pre-trained models were used for transfer in a leave-one-out setting, i.e., taking the predictions of 40 models into a single target language. The same BiLSTMCRF is also used for RaRe. To avoid overfitting, we use early stopping based on a validation set for the HSup, and LSup baselines. For RaRe, given that the model is already trained on noisy data, we stop fine-tuning after only 5 iterations, chosen based on the performance for the first four languages. We compare the supervised HSup and LSup monolingual baselines with our proposed transfer models: MV uniform ensemble, a.k.a.“majority vote”; BEAuns×2, BEAuns unsupervised aggregation models, applied to entities or tokens (see §2.1); BEAsup supervised estimation of BEA prior (§2.2); RaRe, RaRe uns supervised ranking and retraining model (§2.2), and uniform ranking without fine-tuning, respectively; and Oracle selecting the best performing single transfer model, based on test performance. We also compare with BWET (Xie et al., 2018) as state-of-the-art in unsupervised NER transfer. BWET transfers the source English training and development data to the target language using bilingual dictionary induction (Lample et al., 2018), and then uses a transformer architecture to compensate for missing sequential information. 
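Returning briefly to RaRe (§2.2), its ranking and minibatch scheduling steps can be sketched as follows; the distilled BiLSTM-CRF and the fine-tuning pass are omitted, and the variable names are illustrative only:

```python
import numpy as np

def rare_source_weights(f1_scores, k=10):
    """Rank transfer models by F1 on the small gold set, keep the top k,
    and normalise the truncated scores into sampling weights (omega in Section 2.2)."""
    s = np.asarray(f1_scores, dtype=float)
    keep = np.argsort(s)[-k:]          # indices of the k best-scoring source models
    w = np.zeros_like(s)
    w[keep] = s[keep]
    return w / w.sum()

# during distillation, each minibatch takes its silver labels from one source
# model, drawn with probability omega_h
rng = np.random.default_rng(0)
omega = rare_source_weights([0.81, 0.05, 0.62, 0.33, 0.70], k=3)
for step in range(5):
    h = rng.choice(len(omega), p=omega)
    # the silver labels for this batch would come from the h-th model's
    # predictions on the unannotated target text (hypothetical container)
    print(step, "labels taken from source model", h)
```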
We used BWET in both CoNLL, and Wikiann datasets by transferring from their corresponding source English data to the target language.9 4 Results We report the results for single source direct transfer, and then show that our proposed multilingual methods outperform majority voting. Then we analyse the choice of source languages, and how it affects transfer.10 Finally we report results on CoNLL NER datasets. 9Because BWET uses identical characters for bilingual dictionary induction, we observed many English loan words in the target language mapped to the same word in the induced bilingual dictionaries. Filtering such dictionary items might improve BWET. 10For detailed results see Table 4 in the Appendix. 156 af ar bg bn bs ca cs de el et fa fi fr he hi hr hu id lt lv mk ms ro ta tl tr uk vi 20 40 60 80 nl fa ru ar hr fr sk nl fr fi de et it es pt cs nl it cs sk bg id es ar en nl ru ca Target Language F1 Top En MV Figure 2: Best source language ( ) compared with en ( ), and majority voting ( ) over all source languages in terms of F1 performance in direct transfer shown for a subset of the 41 target languages (x axis). Worst transfer score, not shown here, is about 0. See §3 for details of models and datasets. 40 60 80 100 0 100 200 5K+ HSup LSup RaRe t10 BEAent sup t10 MVent t3 MVtok t3 BEAent uns×2 t10 BEAent uns BEAtok uns BWET MVent MVtok F1 over 41 langs. Annotation Requirement (#sentences) Figure 3: The mean and standard deviation for the F1 score of the proposed unsupervised models (BEAtok uns and BEAent uns), supervised models (RaRe and BEAent sup t10) compared with state-of-the-art unsupervised model BWET (Xie et al., 2018), high- and lowresource supervised models HSup and LSup, and majority voting (MVtok) in terms of entity level F1 over the 41 languages (40 for BWET) summarised from Table 4. The x axis shows the annotation requirement of each model in the target language where “200” means 100 sentences each for training and development, and “5K+” means using all the available annotation for training and development sets. Points with the same colour/shape have equal data requirement. Direct Transfer The first research question we consider is the utility of direct transfer, and the simple majority vote ensembling method. As shown in Figure 2, using a single model for direct transfer (English: en) is often a terrible choice. The oracle choice of source language model does much better, however it is not always a closely related language (e.g., Italian: it does best for Indonesian: id, despite the target being closer to Malay: ms). Note the collection of Cyrillic languages (bg, mk, uk) where the oracle is substantially better than the majority vote, which is likely due to script differences. The role of script appears to be more important than language family, as seen for Slavic languages where direct transfer works well between between pairs languages using the same alphabet (Cyrillic versus Latin), but much more poorly when there is an alphabet mismatch.11 The transfer relationship is not symmetric e.g., Persian: fa does best for Arabic: ar, but German: de does best for Persian. Figure 2 also shows that ensemble voting is well below the oracle best language, which is likely to be a result of overall high error rates coupled with error correlation between models, and little can be gained from ensembling. 
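The Procrustes rotation used above to map fastText embeddings into the English space has a simple closed form; the sketch below is a minimal illustration (not the MUSE implementation), with random vectors standing in for the embeddings of the character-identical seed words:

```python
import numpy as np

def procrustes_rotation(src_vecs, tgt_vecs):
    """Orthogonal Procrustes: rotation W minimising ||src_vecs @ W - tgt_vecs||_F,
    where corresponding rows are embeddings of seed word pairs in the two languages."""
    u, _, vt = np.linalg.svd(src_vecs.T @ tgt_vecs)
    return u @ vt

# hypothetical usage: src_emb and en_emb hold 300-d fastText vectors for seed pairs
rng = np.random.default_rng(0)
src_emb, en_emb = rng.normal(size=(1000, 300)), rng.normal(size=(1000, 300))
W = procrustes_rotation(src_emb, en_emb)
aligned = src_emb @ W   # source-language embeddings mapped into the English space
```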
Multilingual Transfer  We report the results for the proposed low-resource supervised models (RaRe and BEAsup) and unsupervised models (BEAuns and BEAuns×2), summarised as an average over the 41 languages in Figure 3 (see Appendix A for the full table of results). The figure compares against high- and low-resource supervised baselines (HSup and LSup, respectively), and BWET. The best performance is achieved with high supervision (HSup, F1 = 89.2), while very limited supervision (LSup) results in a considerably lower F1 of 62.1. The results for MVtok show that uniform ensembling of multiple source models is even worse, by about 5 points. Unsupervised zero-shot learning dramatically improves upon MVtok, and BEAent uns outperforms BEAtok uns, showing the effectiveness of inference over entities rather than tokens. It is clear that having access to limited annotation in the target language makes a substantial difference for BEAent sup and RaRe, with F1 of 74.8 and 77.4, respectively. Further analysis shows that majority voting works reasonably well for Romance and Germanic languages, which are well represented in the dataset, but fails miserably compared to the single best source for Slavic languages (e.g. ru, uk, bg), where there are only a few related languages. For most of the isolated languages (ar, fa, he, vi, ta), explicitly training a model in RaRe outperforms BEAent sup, showing that relying only on aggregation of annotated data has limitations, in that it cannot exploit character and structural features.

11 Detailed direct transfer results are shown in Figure 5 in the Appendix.

Figure 4: The mean F1 performance of MVent, BEAent sup, BEAent uns×2, BEAent uns, oracle, and RaRe over the 41 languages by the number of source languages.

Choice of Source Languages  An important question is how the other models, particularly the unsupervised variants, are affected by the number and choice of source languages. Figure 4 charts the performance of MV, BEA, and RaRe against the number of source models, comparing the use of ideal or realistic selection methods to attempt to find the best source models. MVent, BEAent sup, and RaRe use a small labeled dataset to rank the source models. BEAent uns, oracle has access to the perfect ranking of source models based on their real F1 on the test set. BEAuns×2 is completely unsupervised in that it uses its own estimates to rank all source models. MV doesn't show any benefit with more than 3 source models.12 In contrast, BEA and RaRe continue to improve with up to 10 languages. We show that BEA in two realistic scenarios (unsupervised: BEAent uns×2, and supervised: BEAent sup) is highly effective at discriminating between good and bad source models, and thus filtering out the bad models gives the best results. The BEAent uns×2 curve shows the effect of filtering using a purely unsupervised signal, which has a positive, albeit mild, effect on performance. Although the source model ranking in BEAent uns, oracle is perfect, it only narrowly outperforms BEA. Note also that neither of the BEA curves shows evidence of the sawtooth pattern, i.e., they largely benefit from more inputs, irrespective of their parity. Finally, adding supervision in the target language in RaRe further improves upon the unsupervised models.

12 The sawtooth pattern arises from the increased numbers of ties (broken randomly) with even numbers of inputs.
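The BEA aggregation models build on Bayesian classifier combination (Kim and Ghahramani, 2012). As a self-contained illustration of the underlying idea of estimating a per-source confusion matrix and re-weighting each source's votes accordingly, the sketch below implements the classic, non-Bayesian Dawid-Skene EM algorithm over token-level votes. It is our own simplified stand-in, not the BEA model of §2, which additionally places priors over the confusion matrices and supports entity-level inference.

```python
import numpy as np

def dawid_skene(votes, n_classes, n_iter=50):
    """Aggregate noisy labels with EM in the spirit of Dawid & Skene (1979).

    votes: int array of shape (n_tokens, n_sources) holding label ids.
    Returns per-token class posteriors and one confusion matrix per source.
    """
    n_tokens, n_sources = votes.shape
    # Initialise posteriors from raw vote proportions (a soft majority vote).
    post = np.zeros((n_tokens, n_classes))
    for s in range(n_sources):
        post[np.arange(n_tokens), votes[:, s]] += 1.0
    post /= post.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class prior and per-source confusion matrices conf[s, true, observed].
        prior = post.mean(axis=0) + 1e-9
        conf = np.full((n_sources, n_classes, n_classes), 1e-6)
        for s in range(n_sources):
            for obs in range(n_classes):
                conf[s, :, obs] += post[votes[:, s] == obs].sum(axis=0)
        conf /= conf.sum(axis=2, keepdims=True)
        # E-step: recompute token posteriors (in log space for stability).
        log_post = np.tile(np.log(prior), (n_tokens, 1))
        for s in range(n_sources):
            log_post += np.log(conf[s][:, votes[:, s]]).T
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
    return post, conf
```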
CoNLL Dataset  Finally, we apply our model to the CoNLL-02/03 datasets, to benchmark our technique against related work. This corpus is much less rich than Wikiann used above, as it includes only four languages (en, de, nl, es), and furthermore, the languages are closely related and share the same script. Results in Table 2 show that our methods are competitive with benchmark methods, and, moreover, the use of 100 annotated sentences in the target language (RaRe l) gives good improvements over the unsupervised models.13 Results also show that MV does very well, especially MVent, and its performance is comparable to BEA's. Note that there are only 3 source models and none of them is clearly bad, so BEA estimates that they are similarly reliable, which results in little difference in performance between BEA and MV.

13 For German, because of its capitalisation pattern, we lowercase all the source and target data, and also remove German as a source model for other languages.

lang.                         de    es    nl    en
Täckström et al. (2012) [p]   40.4  59.3  58.4  —
Nothman et al. (2013) [w]     55.8  61.0  64.0  61.3
Tsai et al. (2016) [w]        48.1  60.6  61.6  —
Ni et al. (2017) [w, p, d]    58.5  65.1  65.4  —
Mayhew et al. (2017) [w, d]   59.1  66.0  66.5  —
Xie et al. (2018) [0]         57.8  72.4  70.4  —
our work
MVtok [0]                     57.4  66.4  71.0  62.1
MVent [0]                     57.7  69.0  70.3  64.6
BEAtok uns [0]                58.2  64.7  70.1  61.2
BEAent uns [0]                57.8  63.4  70.3  64.8
RaRe uns [0]                  59.1  71.8  67.6  67.5
RaRe [l]                      64.0  72.5  72.5  70.0
HSup                          79.1  85.7  87.1  89.5
Table 2: The performance of RaRe and BEA in terms of phrase-based F1 on CoNLL NER datasets compared with state-of-the-art benchmark methods. Resource requirements are indicated in brackets, p: parallel corpus, w: Wikipedia, d: dictionary, l: 100 NER annotations, 0: no extra resources.

5 Related Work

Two main approaches for cross-lingual transfer are representation and annotation projection. Representation projection learns a model in a high-resource source language using representations that are cross-linguistically transferable, and then directly applies the model to data in the target language. This can include the use of cross-lingual word clusters (Täckström et al., 2012) and word embeddings (Ammar et al., 2016; Ni et al., 2017), multitask learning with a closely related high-resource language (e.g. Spanish for Galician) (Cotterell and Duh, 2017), or bridging the source and target languages through phonemic transcription (Bharadwaj et al., 2016) or Wikification (Tsai et al., 2016). In annotation projection, the annotations of tokens in a source sentence are projected to their aligned tokens in the target language through a parallel corpus. Annotation projection has been applied to POS tagging (Yarowsky et al., 2001; Das and Petrov, 2011; Duong et al., 2014; Fang and Cohn, 2016), NER (Zitouni and Florian, 2008; Ehrmann et al., 2011; Agerri et al., 2018), and parsing (Hwa et al., 2005; Ma and Xia, 2014; Rasooli and Collins, 2015a,b). The Bible, Europarl, and recently the Watchtower have been used as parallel corpora, which are limited in genre, size, and language coverage, motivating the use of Wikipedia to create weak annotation for multilingual tasks such as NER (Nothman et al., 2013). Recent advances in (un)supervised bilingual dictionary induction (Gouws and Søgaard, 2015; Duong et al., 2016; Lample et al., 2018; Artetxe et al., 2018; Schuster et al., 2019) have enabled cross-lingual alignment with bilingual dictionaries (Mayhew et al., 2017; Xie et al., 2018).
Most annotation projection methods, with few exceptions (Täckström, 2012; Plank and Agić, 2018), use only one language (often English) as the source language. In the multi-source setting, majority voting is often used to aggregate the noisy annotations (e.g. Plank and Agić (2018)). Fang and Cohn (2016) show the importance of modelling the annotation biases that the source language(s) might project to the target language.

Transfer from multiple source languages: Previous work has shown the improvements of multi-source transfer in NER (Täckström, 2012; Fang et al., 2017; Enghoff et al., 2018), POS tagging (Snyder et al., 2009; Plank and Agić, 2018), and parsing (Ammar et al., 2016) compared to single-source transfer; however, multi-source transfer might be noisy as a result of divergence in script, phonology, morphology, syntax, and semantics between the source languages and the target language. To capture such differences, various methods have been proposed: latent variable models (Snyder et al., 2009), majority voting (Plank and Agić, 2018), utilising typological features (Ammar et al., 2016), or explicitly learning annotation bias (Fang and Cohn, 2017). Our work is also related to knowledge distillation from multiple source models, applied in parsing (Kuncoro et al., 2016) and machine translation (Kim and Rush, 2016; Johnson et al., 2017). In this work, we use truth inference to model the transfer annotation bias from diverse source models. Finally, our work is related to truth inference from crowd-sourced annotations (Whitehill et al., 2009; Welinder et al., 2010), and most importantly from diverse classifiers (Kim and Ghahramani, 2012; Ratner et al., 2017). Nguyen et al. (2017) propose a hidden Markov model for aggregating crowdsourced sequence labels, but only learn per-class accuracies for workers instead of full confusion matrices, in order to address the data sparsity problem in crowdsourcing.

6 Conclusion

Cross-lingual transfer does not work out of the box, especially when using large numbers of source languages and distantly related target languages. In an NER setting using a collection of 41 languages, we showed that simple methods such as uniform ensembling do not work well. We proposed two new multilingual transfer models (RaRe and BEA), based on unsupervised transfer, or on a supervised transfer setting with a small labelled dataset of 100 sentences in the target language. We also compared our results with BWET (Xie et al., 2018), a state-of-the-art unsupervised single-source (English) transfer model, and showed that multilingual transfer outperforms it; moreover, our work is orthogonal to theirs, in that if training data from multiple source models is created, RaRe and BEA can still combine it and outperform majority voting. Our unsupervised method, BEAuns, provides a fast and simple way of annotating data in the target language, which is capable of reasoning under noisy annotations, and outperforms several competitive baselines, including the majority voting ensemble, a low-resource supervised baseline, and the oracle single best transfer model. We show that light supervision improves performance further, and that our second approach, RaRe, based on ranking transfer models and then retraining on the target language, results in further and more consistent performance improvements.
Acknowledgments This work was supported by a Facebook Research Award and the Defense Advanced Research Projects Agency Information Innovation Office (I2O), under the Low Resource Languages for Emergent Incidents (LORELEI) program issued by DARPA/I2O under Contract No. HR0011-15C-0114. The views expressed are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government. References Rodrigo Agerri, Yiling Chung, Itziar Aldabe, Nora Aranberri, Gorka Labaka, and German Rigau. 2018. Building named entity recognition taggers via parallel corpora. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018). European Language Resource Association. Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah Smith. 2016. Many languages, one parser. Transactions of the Association for Computational Linguistics, 4:431–444. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789–798, Melbourne, Australia. Akash Bharadwaj, David Mortensen, Chris Dyer, and Jaime Carbonell. 2016. Phonologically aware neural model for named entity recognition in low resource transfer settings. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1462–1472. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Ryan Cotterell and Kevin Duh. 2017. Lowresource named entity recognition with crosslingual, character-level neural conditional random fields. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 91–96. Asian Federation of Natural Language Processing. Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 600–609. Alexander Philip Dawid and Allan M Skene. 1979. Maximum likelihood estimation of observer errorrates using the em algorithm. Applied statistics, pages 20–28. Long Duong, Trevor Cohn, Steven Bird, and Paul Cook. 2015. Low resource dependency parsing: Cross-lingual parameter sharing in a neural network parser. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 2631, 2015, Beijing, China, Volume 2: Short Papers, pages 845–850. Long Duong, Trevor Cohn, Karin Verspoor, Steven Bird, and Paul Cook. 2014. What can we get from 1000 tokens? A case study of multilingual POS tagging for resource-poor languages. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 886–897. Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2016. Learning crosslingual word embeddings without bilingual corpora. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1285– 1295. Maud Ehrmann, Marco Turchi, and Ralf Steinberger. 2011. 
Building a multilingual named entityannotated corpus using annotation projection. In Proceedings of the International Conference Recent Advances in Natural Language Processing 2011, pages 118–124. 160 Jan Vium Enghoff, Søren Harrison, and ˇZeljko Agi´c. 2018. Low-resource named entity recognition via multi-source projection: Not quite there yet? In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, pages 195–201. Meng Fang and Trevor Cohn. 2016. Learning when to trust distant supervision: An application to lowresource POS tagging using cross-lingual projection. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 178–186. Meng Fang and Trevor Cohn. 2017. Model transfer for tagging low-resource languages using a bilingual dictionary. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 587–593. Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to active learn: A deep reinforcement learning approach. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 595–605. Dan Garrette and Jason Baldridge. 2013. Learning a part-of-speech tagger from two hours of annotation. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 138–147. Stephan Gouws and Anders Søgaard. 2015. Simple task-specific bilingual word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1386–1390. Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara I. Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural Language Engineering, 11(3):311–325. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Michael Irwin Jordan. 1998. Learning in graphical models, volume 89. Springer Science & Business Media. Hyun-Chul Kim and Zoubin Ghahramani. 2012. Bayesian classifier combination. In AISTATS, volume 22 of JMLR Proceedings, pages 619–627. JMLR.org. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317–1327. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Distilling an ensemble of greedy dependency parsers into one mst parser. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1744–1753. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. Guillaume Lample, Alexis Conneau, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. In International Conference on Learning Representations. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074, Berlin, Germany. Association for Computational Linguistics. Xuezhe Ma and Fei Xia. 2014. Unsupervised dependency parsing with transferring distribution via parallel guidance and entropy regularization. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1337–1348, Baltimore, Maryland. Association for Computational Linguistics. Stephen Mayhew, Chen-Tse Tsai, and Dan Roth. 2017. Cheap translation for cross-lingual named entity recognition. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2536–2545. An Thanh Nguyen, Byron Wallace, Junyi Jessy Li, Ani Nenkova, and Matthew Lease. 2017. Aggregating and predicting sequence labels from crowd annotations. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 299–309. Jian Ni, Georgiana Dinu, and Radu Florian. 2017. Weakly supervised cross-lingual named entity recognition via effective annotation and representation projection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 August 4, Volume 1: Long Papers, pages 1470–1480. Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R. Curran. 2013. Learning multilingual named entity recognition from wikipedia. Artificial Intelligence, 194:151–175. Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Crosslingual name tagging and linking for 282 languages. 161 In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1946–1958. Barbara Plank and ˇZeljko Agi´c. 2018. Distant supervision from disparate sources for low-resource partof-speech tagging. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 614–620. Mohammad Sadegh Rasooli and Michael Collins. 2015a. Density-driven cross-lingual transfer of dependency parsers. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 328–338. Mohammad Sadegh Rasooli and Michael Collins. 2015b. Density-driven cross-lingual transfer of dependency parsers. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 328–338, Lisbon, Portugal. Association for Computational Linguistics. Alexander Ratner, Stephen H Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher R´e. 2017. Snorkel: Rapid training data creation with weak supervision. Proceedings of the VLDB Endowment, 11(3):269–282. Vikas C. Raykar and Shipeng Yu. 2012. Eliminating spammers and ranking annotators for crowdsourced labeling tasks. J. Mach. Learn. Res., 13:491–518. Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing. Benjamin Snyder, Tahira Naseem, Jacob Eisenstein, and Regina Barzilay. 2009. Adding more languages improves unsupervised multilingual part-of-speech tagging: a bayesian non-parametric approach. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 83–91. Oscar T¨ackstr¨om. 2012. 
Nudging the envelope of direct transfer methods for multilingual named entity recognition. In Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure, pages 55–63. Oscar Täckström, Ryan McDonald, and Jakob Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 477–487. Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In Proceedings of the 6th Conference on Natural Language Learning - Volume 20, COLING-02, pages 1–4, Stroudsburg, PA, USA. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, CONLL '03, pages 142–147, Stroudsburg, PA, USA. Chen-Tse Tsai, Stephen Mayhew, and Dan Roth. 2016. Cross-lingual named entity recognition via Wikification. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 219–228. Peter Welinder, Steve Branson, Serge J. Belongie, and Pietro Perona. 2010. The multidimensional wisdom of crowds. In Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010, Vancouver, British Columbia, Canada, pages 2424–2432. Curran Associates, Inc. Jacob Whitehill, Paul Ruvolo, Tingfan Wu, Jacob Bergsma, and Javier R. Movellan. 2009. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems 2009, Vancouver, British Columbia, Canada, pages 2035–2043. Curran Associates, Inc. Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A. Smith, and Jaime Carbonell. 2018. Neural cross-lingual named entity recognition with minimal resources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 369–379. David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research, pages 1–8. Imed Zitouni and Radu Florian. 2008. Mention detection crossing the language barrier. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 600–609.

A Appendices

A.1 Hyperparameters

We tuned the batch size and the learning rate using development sets in four languages,14 and then fixed these hyperparameters for all other languages in each model. The batch size was set to 1 sentence in low-resource scenarios (the LSup baseline and the fine-tuning phase of RaRe), and to 100 sentences in high-resource settings (HSup and the pretraining phase of RaRe). The learning rate was set to 0.001 and 0.01 for the high-resource and low-resource baseline models, respectively, and to 0.005 and 0.0005 for the pretraining and fine-tuning phases of RaRe, based on development results for the four languages. For the CoNLL datasets, we had to decrease the batch size of the pre-training phase from 100 to 20 (because of GPU memory issues).
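For quick reference, the regimes reported above can be collected into a small configuration sketch. This is a plain restatement of the stated values; the dictionary layout and key names are ours, not taken from the released code.

```python
# Hyperparameters reported in A.1; names are illustrative only.
TRAINING_CONFIG = {
    "hsup":          {"batch_size": 100, "lr": 0.001,  "stopping": "early stopping on dev"},
    "lsup":          {"batch_size": 1,   "lr": 0.01,   "stopping": "early stopping on dev"},
    "rare_pretrain": {"batch_size": 100, "lr": 0.005,  "stopping": None},  # not stated above
    "rare_finetune": {"batch_size": 1,   "lr": 0.0005, "stopping": "fixed 5 iterations"},
}
# For the CoNLL experiments the pre-training batch size was reduced to 20
# because of GPU memory constraints.
CONLL_OVERRIDES = {"rare_pretrain": {"batch_size": 20}}
```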
A.2 Cross-lingual Word Embeddings

We experimented with Wiki and CommonCrawl monolingual embeddings from fastText (Bojanowski et al., 2017). Each of the 41 languages is mapped to the English embedding space using three methods from MUSE: 1) supervised, with bilingual dictionaries; 2) seeding using identical character sequences; and 3) unsupervised training using adversarial learning (Lample et al., 2018). The cross-lingual mappings are evaluated by precision at k = 1. The resulting cross-lingual embeddings are then used in NER direct transfer in a leave-one-out setting for the 41 languages (41×40 transfers), and we report the mean F1 in Table 3. CommonCrawl doesn't perform well in bilingual induction despite having larger text corpora, and underperforms in direct transfer NER. It is also evident that using identical character strings instead of a bilingual dictionary as the seed for learning a supervised bilingual mapping barely affects the performance. This finding also applies to few-shot learning over larger ensembles: running RaRe over 40 source languages achieves an average F1 of 77.9 when using embeddings trained with a dictionary, versus 76.9 using string identity instead. For this reason we have used the string identity method in the paper (e.g., Table 4), providing greater portability to language pairs without a bilingual dictionary. Experiments with unsupervised mappings performed substantially worse than supervised methods, and so we didn't explore these further.

14 Afrikaans, Arabic, Bulgarian and Bengali.

                  Transl. Acc.   Dir. Transf. F1
Unsup      crawl  34             26
           wiki   24             21
IdentChar  crawl  43             37
           wiki   53             44
Sup        crawl  50             39
           wiki   54             45
Table 3: The effect of the choice of monolingual word embeddings (Common Crawl and Wikipedia), and their cross-lingual mapping, on NER direct transfer. Word translation accuracy and direct transfer NER F1 are averaged over 40 languages.
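To make the identical-character ("IdentChar") mapping of A.2 concrete, the sketch below builds a seed dictionary from word types whose surface strings are shared by the two vocabularies and solves the orthogonal Procrustes problem to rotate one embedding space into the other, as in the general MUSE recipe. It is a generic re-implementation under our own names, not the exact pipeline used for these experiments.

```python
import numpy as np

def identical_char_seed(src_vocab, tgt_vocab):
    """Seed dictionary from word types with identical surface strings."""
    return [(w, w) for w in sorted(set(src_vocab) & set(tgt_vocab))]

def procrustes_map(src_emb, tgt_emb, seed, src_index, tgt_index):
    """Orthogonal W minimising ||X W^T - Y||_F over the seed pairs.

    src_emb, tgt_emb: (vocab_size, dim) arrays; *_index: word -> row id.
    Mapped source vectors are then src_emb @ W.T, living in the target
    (English) space and usable as fixed inputs to the BiLSTM-CRF tagger.
    """
    X = np.stack([src_emb[src_index[s]] for s, _ in seed])
    Y = np.stack([tgt_emb[tgt_index[t]] for _, t in seed])
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return U @ Vt   # closed-form orthogonal Procrustes solution
```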
A.3 Direct Transfer Results

Figure 5 shows the performance of an NER model trained in a high-resource setting on a source language and applied to the other 40 target languages (leave-one-out). An interesting finding is that symmetry does not always hold (e.g. id vs. ms, or fa vs. ar).

Figure 5: The direct transfer performance of a source NER model trained in a high-resource setting applied on the other 40 target languages, and evaluated in terms of phrase-level F1. The languages are roughly sorted by language family. Slavic languages in Cyrillic script are from bg to uk, and those in Latin script are from bs to sl.

A.4 Detailed Low-resource Results

The results of applying the baselines, the proposed models and their variations, and the unsupervised transfer model of Xie et al. (2018) are shown in Table 4.

lang #train(k) #test(k) BiDic.P@1 HSup LSup RaRe-t1 RaRe-t10 RaRe-all BEAent-sup-t10 RaRe-uns BWET BEAent-uns×2-t10 BEAent-uns BEAtok-uns MVtok Oracle
af 5 1 36 84 59 73 79 79 80 76 64 79 79 74 75 80
ar 20 10 46 88 64 71 74 74 65 26 19 54 45 54 12 56
bg 20 10 55 90 61 80 81 81 81 5 51 81 65 54 4 76
bn 10 1 1 95 70 68 74 74 69 65 36 67 66 60 56 63
bs 15 1 30 92 63 80 79 80 78 76 52 80 78 77 69 82
ca 20 10 70 91 62 82 86 84 86 80 62 85 80 79 72 83
cs 20 10 64 90 62 77 78 75 78 73 59 77 75 72 71 78
da 20 10 68 90 62 77 81 81 82 79 68 83 82 79 78 80
de 20 10 73 86 58 73 74 73 72 69 63 72 71 64 68 70
el 20 10 55 89 61 67 67 67 54 13 45 49 43 34 13 45
en 20 10 — 81 47 64 65 64 65 58 — 63 61 57 56 61
es 20 10 83 90 63 83 84 84 85 76 62 85 81 76 73 84
et 15 10 41 90 64 73 77 77 78 72 58 78 78 71 73 75
fa 20 10 33 93 74 78 81 79 69 30 16 65 50 52 15 60
fi 20 10 58 89 67 78 80 80 81 76 68 81 80 69 77 78
fr 20 10 82 88 57 81 81 80 84 75 59 83 79 73 71 80
he 20 10 52 85 53 61 61 60 55 40 26 54 54 46 34 50
hi 5 1 29 85 68 64 74 73 68 48 27 64 61 58 35 54
hr 20 10 48 89 61 74 79 78 80 76 49 80 79 77 73 78
hu 20 10 64 90 59 75 79 78 80 71 55 79 79 69 73 76
id 20 10 68 91 67 82 83 81 75 59 62 73 67 61 62 79
it 20 10 77 89 60 80 81 80 82 75 59 81 78 76 72 79
lt 10 10 26 86 62 72 79 80 79 76 48 80 80 75 77 74
lv 10 10 31 91 68 70 75 75 69 68 40 69 69 67 65 66
mk 10 1 50 91 67 79 82 81 80 4 38 79 66 48 3 75
ms 20 1 48 91 66 78 80 78 74 69 62 68 67 63 68 74
nl 20 10 76 89 59 78 80 80 81 77 63 82 81 78 76 79
no 20 10 67 90 65 79 82 81 83 79 59 83 83 77 79 79
pl 20 10 66 89 61 76 79 78 81 73 63 82 80 77 76 78
pt 20 10 80 90 59 79 81 80 82 77 65 82 77 74 70 82
ro 20 10 67 92 66 80 82 82 80 76 46 78 76 74 67 77
ru 20 10 59 86 53 73 71 71 56 10 38 53 40 36 11 61
sk 20 10 52 91 62 76 79 79 80 74 50 79 76 76 71 79
sl 15 10 47 92 64 76 80 80 79 76 58 79 78 76 73 78
sq 5 1 37 88 69 79 84 84 83 82 59 83 84 76 79 79
sv 20 10 61 93 69 83 83 84 82 77 60 79 80 69 76 84
ta 15 1 7 84 54 44 53 53 46 35 12 39 42 25 29 38
tl 10 1 20 93 66 75 82 80 78 65 60 62 60 57 52 76
tr 20 10 61 90 61 75 77 77 77 70 53 77 76 67 67 71
uk 20 10 45 89 60 70 78 79 70 5 35 64 58 49 6 60
vi 20 10 54 88 55 64 72 72 61 58 53 56 55 48 47 56
µ — — — 89.2 62.1 74.3 77.4 76.9 74.8 60.2 50.5 72.8 69.7 64.5 56.7 71.6
σ — — — 2.8 5.2 7.3 6.4 6.4 9.6 24.1 14.7 11.5 12.6 13.7 25 11.5
Table 4: The size of training and test sets (development set size equals test set size) in thousands of sentences, and the precision at 1 for bilingual dictionaries induced by mapping languages to the English embedding space using identical characters (BiDic.P@1). F1 scores on the test set, comparing baseline supervised models (HSup, LSup), multilingual transfer from the top k source languages (RaRe, 5 runs, k = 1, 10, 40), an unsupervised RaRe with uniform expertise and no fine-tuning (RaRe uns), and aggregation methods: majority voting (MVtok), BEAtok uns and BEAent uns (Bayesian aggregation at token and entity level), and the oracle single best annotation (Oracle). We also compare with BWET (Xie et al., 2018), an unsupervised transfer model with state-of-the-art results on the CoNLL NER datasets. The mean and standard deviation over all 41 languages, µ and σ, are also reported.
2019
15
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1549–1559, Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

Towards Scalable and Reliable Capsule Networks for Challenging NLP Applications

Wei Zhao†, Haiyun Peng‡, Steffen Eger†, Erik Cambria‡ and Min YangΦ
† Computer Science Department, Technische Universität Darmstadt, Germany
‡ School of Computer Science and Engineering, Nanyang Technological University, Singapore
Φ Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
www.aiphes.tu-darmstadt.de

Abstract

Obstacles hindering the development of capsule networks for challenging NLP applications include poor scalability to large output spaces and less reliable routing processes. In this paper, we introduce (i) an agreement score to evaluate the performance of routing processes at instance level; (ii) an adaptive optimizer to enhance the reliability of routing; (iii) capsule compression and partial routing to improve the scalability of capsule networks. We validate our approach on two NLP tasks, namely multi-label text classification and question answering. Experimental results show that our approach considerably improves over strong competitors on both tasks. In addition, we obtain the best results in low-resource settings with few training instances.1

1 Introduction

In recent years, deep neural networks have achieved outstanding success in natural language processing (NLP), computer vision and speech recognition. However, these deep models are data-hungry and generalize poorly from small datasets, very much unlike humans (Lake et al., 2015). This is an important issue in NLP since sentences with different surface forms can convey the same meaning (paraphrases) and not all of them can be enumerated in the training set. For example, "Peter did not accept the offer" and "Peter turned down the offer" are semantically equivalent, but use different surface realizations. In image classification, progress on the generalization ability of deep networks has been made by capsule networks (Sabour et al., 2017; Hinton et al., 2018). They are capable of generalizing to the same object in different 3D images with various viewpoints.

1 Our code is publicly available at http://bit.ly/311Dcod

Figure 1 (in-figure examples: "Jerry completed his project.", "Jerry managed to finish his project.", "Jerry succeeded in finishing his project.", "Jerry is sleeping."): The extrapolation regime for an observed sentence can be found during training. Then, the unseen sentences in this regime may be generalized successfully.

Such generalization capability can be learned from examples with few viewpoints by extrapolation (Hinton et al., 2011). This suggests that capsule networks can similarly abstract away from different surface realizations in NLP applications. Figure 1 illustrates this idea of how observed sentences in the training set are generalized to unseen sentences by extrapolation. In contrast, traditional neural networks require massive amounts of training samples for generalization. This is especially true in the case of convolutional neural networks (CNNs), where pooling operations wrongly discard positional information and do not consider hierarchical relationships between local features (Sabour et al., 2017).
Figure 2: Outputs attend to a) active neurons found by pooling operations, b) all neurons, c) relevant capsules found in routing processes.

Capsule networks, instead, have the potential for learning hierarchical relationships between consecutive layers by using routing processes without parameters, which are clustering-like methods (Sabour et al., 2017), and additionally improve the generalization capability. We contrast such routing processes with pooling and fully connected layers in Figure 2. Despite some recent success in NLP tasks (Wang et al., 2018; Xia et al., 2018; Xiao et al., 2018; Zhang et al., 2018a; Zhao et al., 2018), a few important obstacles still hinder the development of capsule networks for mature NLP applications. For example, selecting the number of iterations is crucial for routing processes, because they iteratively route low-level capsules to high-level capsules in order to learn hierarchical relationships between layers. However, existing routing algorithms use the same number of iterations for all examples, which makes them unreliable for judging the convergence of routing. As shown in Figure 3, a routing process with five iterations on all examples converges to a lower training loss at system level, but at instance level, for one example, convergence has still not been reached. Additionally, training capsule networks is more difficult than training traditional neural networks such as CNNs and long short-term memory networks (LSTMs), due to the large number of capsules and potentially large output spaces, which requires extensive computational resources in the routing process. In this work, we address these issues via the following contributions:

• We formulate routing processes as a proxy problem minimizing a total negative agreement score in order to evaluate how routing processes perform at instance level, which will be discussed in more depth later.
• We introduce an adaptive optimizer to self-adjust the number of iterations for each example in order to improve instance-level convergence and enhance the reliability of routing processes.
• We present capsule compression and partial routing to achieve better scalability of capsule networks on datasets with large output spaces.
• Our framework outperforms strong baselines on multi-label text classification and question answering. We also demonstrate its superior generalization capability in low-resource settings.

Figure 3: (left) System-level routing evaluation on all examples; (right) instance-level routing evaluation on one example.

2 NLP-Capsule Framework

We have motivated the need for capsule networks that are capable of scaling to large output spaces and of offering more reliable routing processes at instance level. We now build a unified capsule framework, which we call NLP-Capsule. It is shown in Figure 4 and described below.

2.1 Convolutional Layer

We use a convolutional operation to extract features from documents by taking a sliding window over document embeddings. Let X ∈ R^{l×v} be a matrix of stacked v-dimensional word embeddings for an input document with l tokens. Furthermore, let W^a ∈ R^{l×k} be a convolutional filter with a width k.
We apply this filter to a local region X^⊤_{i:i+k−1} ∈ R^{k×l} to generate one feature:

m_i = f(W^a ∘ X^⊤_{i:i+k−1})

where ∘ denotes element-wise multiplication, and f is a nonlinear activation function (i.e., ReLU). For ease of exposition, we omit all bias terms. Then, we can collect all m_i into one feature map (m_1, . . . , m_{(v−k+1)/2}) after sliding the filter over the current document. To increase the diversity of feature extraction, we concatenate multiple feature maps extracted by three filters with different window sizes (2, 4, 8) and pass them to the primary capsule layer.

2.2 Primary Capsule Layer

In this layer, we use a group-convolution operation to transform feature maps into primary capsules. As opposed to using a scalar for each element in the feature maps, capsules use a group of neurons to represent each element in the current layer, which has the potential for preserving more information.

Figure 4: An illustration of the NLP-Capsule framework.

Using 1×1 filters W^b = {w_1, ..., w_d} ∈ R^d, in total d groups are used to transform each scalar m_i in the feature maps into one capsule p_i, a d-dimensional vector, denoted as:

p_i = g(p_{i1} ⊕ p_{i2} ⊕ · · · ⊕ p_{id}) ∈ R^d

where p_{ij} = m_i · w_j ∈ R and ⊕ is the concatenation operator. Furthermore, g is a non-linear function (i.e., the squashing function). The length ||p_i|| of each capsule p_i indicates the probability of it being useful for the task at hand. Hence, a capsule's length has to be constrained into the unit interval [0, 1] by the squashing function g:

g(x) = (||x||^2 / (1 + ||x||^2)) · (x / ||x||)

Capsule Compression  One major issue in this layer is that the number of primary capsules becomes large in proportion to the size of the input documents, which requires extensive computational resources in the routing processes (see Section 2.3). To mitigate this issue, we condense the large number of primary capsules into a smaller number. In this way, we can merge similar capsules and remove outliers. Each condensed capsule û_i is calculated by using a weighted sum over all primary capsules, denoted as:

û_i = Σ_j b_j p_j ∈ R^d

where the parameter b_j is learned by supervision.

2.3 Aggregation Layer

Pooling is the simplest aggregation function for routing condensed capsules into the subsequent layer, but it loses almost all information during aggregation. Alternatively, routing processes are introduced to iteratively route condensed capsules into the next layer for learning hierarchical relationships between two consecutive layers. We now describe this iterative routing algorithm. Let {û_1, . . . , û_m} and {v_1, . . . , v_n} be a set of condensed capsules in layer ℓ and a set of high-level capsules in layer ℓ+1, respectively. The basic idea of routing is two-fold. First, we transform the condensed capsules into a collection of candidates û_{j|1}, . . . , û_{j|m} for the j-th high-level capsule in layer ℓ+1. Following Sabour et al. (2017), each element û_{j|i} is calculated by:

û_{j|i} = W^c û_i ∈ R^d

where W^c is a linear transformation matrix. Then, we represent a high-level capsule v_j by a weighted sum over those candidates, denoted as:

v_j = Σ_{i=1}^{m} c_{ij} û_{j|i}

where c_{ij} is a coupling coefficient that is iteratively updated by a clustering-like method.
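For concreteness, here is a small NumPy sketch of the building blocks defined in §2.2-2.3: the squashing function, capsule compression, the candidate transformation, and the coupling-weighted aggregation. It is a simplified illustration under our own shapes and names, not the authors' implementation (e.g., the learned parameters b and W^c are passed in as plain arrays).

```python
import numpy as np

def squash(x, axis=-1, eps=1e-9):
    # g(x) = (||x||^2 / (1 + ||x||^2)) * (x / ||x||), applied per capsule vector.
    norm = np.linalg.norm(x, axis=axis, keepdims=True)
    return (norm ** 2 / (1.0 + norm ** 2)) * (x / (norm + eps))

def compress(primary, b):
    # Condense many primary capsules into fewer: each condensed capsule is a
    # weighted sum over all primary capsules.
    # primary: (n_primary, d); b: (n_condensed, n_primary).
    return squash(b @ primary)

def candidates(condensed, W):
    # \hat{u}_{j|i} = W^c \hat{u}_i, one candidate per (low-level i, high-level j) pair.
    # condensed: (m, d); W: (n_high, d, d) -> result: (m, n_high, d)
    return np.einsum('jde,me->mjd', W, condensed)

def aggregate(u_hat, c):
    # v_j = sum_i c_ij \hat{u}_{j|i}; c: (m, n_high) coupling coefficients.
    return squash(np.einsum('mj,mjd->jd', c, u_hat))
```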
Our Routing  As discussed earlier, routing algorithms like dynamic routing (Sabour et al., 2017) and EM routing (Hinton et al., 2018), which use the same number of iterations for all samples, perform well according to training loss at system level, but at instance level, for individual examples, convergence has still not been reached. This increases the risk of unreliability for routing processes (see Figure 3). To evaluate the performance of routing processes at instance level, we formulate them as a proxy problem minimizing the negative agreement score (NAS) function:

min_{c,v} f(u) = − Σ_{i,j} c_{ij} ⟨v_j, û_{j|i}⟩
s.t. ∀i, j : c_{ij} > 0, Σ_j c_{ij} = 1.

The basic intuition behind this is to assign a higher weight c_{ij} to an agreeable pair ⟨v_j, û_{j|i}⟩ if the capsules v_j and û_{j|i} are close to each other, such that the total agreement score Σ_{i,j} c_{ij} ⟨v_j, û_{j|i}⟩ is maximized. However, the choice of NAS functions remains an open problem. Hinton et al. (2018) hypothesize that the agreeable pairs in NAS functions are from Gaussian distributions. Instead, we study NAS functions by introducing Kernel Density Estimation (KDE), since this yields a non-parametric density estimator requiring no assumption that the agreeable pairs are drawn from parametric distributions. Here, we formulate the NAS function in a KDE form:

min_{c,v} f(u) = − Σ_{i,j} c_{ij} k(d(v_j, û_{j|i}))    (1)

where d is a distance metric with ℓ2 norm, and k is an Epanechnikov kernel function (Wand and Jones, 1994) with:

k(x) = 1 − x for x ∈ [0, 1), and 0 for x ≥ 1.

The solution we use for KDE is to apply Mean Shift (Comaniciu and Meer, 2002) to minimize the NAS function f(u):

∇f(u) = Σ_{i,j} c_{ij} k′(d(v_j, û_{j|i})) ∂d(v_j, û_{j|i})/∂v

First, v_j^{τ+1} can be updated while c_{ij}^{τ+1} is fixed:

v_j^{τ+1} = Σ_i c_{ij}^τ k′(d(v_j^τ, û_{j|i})) û_{j|i} / Σ_i k′(d(v_j^τ, û_{j|i}))

Then, c_{ij}^{τ+1} can be updated using standard gradient descent:

c_{ij}^{τ+1} = c_{ij}^τ + α · k(d(v_j^τ, û_{j|i}))

where α is the hyper-parameter that controls the step size. To address the issue of convergence not being reached at instance level, we present an adaptive optimizer to self-adjust the number of iterations for individual examples according to their negative agreement scores (see Algorithm 1). Following Zhao et al. (2018), we replace the standard softmax with leaky-softmax, which decreases the strength of noisy capsules.

Algorithm 1: Our Adaptive KDE Routing
1: procedure ROUTING(û_{j|i}, ℓ)
2:   initialize ∀i, j : c_{ij} = 1/n_{ℓ+1}
3:   while true do
4:     for each capsule i, j in layers ℓ, ℓ+1 do
5:       c_{ij} ← leaky-softmax(c_{ij})
6:     for each capsule j in layer ℓ+1 do
7:       v_j ← Σ_i c_{ij} k′(d(v_j, û_{j|i})) û_{j|i} / Σ_i k′(d(v_j, û_{j|i}))
8:     for each capsule i, j in layers ℓ, ℓ+1 do
9:       c_{ij} ← c_{ij} + α · k(d(v_j, û_{j|i}))
10:    for each capsule j in layer ℓ+1 do
11:      a_j ← |v_j|
12:    NAS = log(Σ_{i,j} c_{ij} k(d(v_j, û_{j|i})))
13:    if |NAS − Last_NAS| < ϵ then
14:      break
15:    else
16:      Last_NAS ← NAS
17:   return v_j, a_j

2.4 Representation Layer

This is the top-level layer containing the final capsules calculated by iteratively minimizing the NAS function (see Eq. 1), where the number of final capsules corresponds to the entire output space. Therefore, once the output space grows large (thousands of labels), computing this function becomes extremely expensive, which yields the scalability bottleneck of capsule networks.
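Algorithm 1 can be rendered in a few lines of NumPy. The sketch below follows the pseudo-code above (Epanechnikov kernel, coupling-coefficient updates, and the NAS-based adaptive stopping test), but it is our own simplified reading: it uses a plain softmax instead of leaky-softmax, initialises v as a coupling-weighted mean, and ignores batching.

```python
import numpy as np

def epanechnikov(x):
    # k(x) = 1 - x for x in [0, 1), and 0 otherwise.
    return np.where((x >= 0.0) & (x < 1.0), 1.0 - x, 0.0)

def softmax(x, axis=1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_kde_routing(u_hat, alpha=0.1, eps=1e-4, max_iter=100):
    """Adaptive KDE routing over candidates u_hat of shape (m, n, d),
    for m low-level and n high-level capsules (cf. Algorithm 1)."""
    m, n, _ = u_hat.shape
    c = np.full((m, n), 1.0 / n)                     # line 2
    v = np.einsum('ij,ijd->jd', c, u_hat)            # initial high-level capsules
    last_nas = None
    for _ in range(max_iter):
        r = softmax(c, axis=1)                       # plain softmax as a stand-in for leaky-softmax
        dist = np.linalg.norm(u_hat - v[None], axis=2)       # d(v_j, u_hat_{j|i}), shape (m, n)
        kprime = ((dist >= 0.0) & (dist < 1.0)).astype(float)  # |k'| = 1 on the kernel support
        num = np.einsum('ij,ij,ijd->jd', r, kprime, u_hat)
        v = num / (kprime.sum(axis=0)[:, None] + 1e-9)       # kernel-weighted mean-shift update
        dist = np.linalg.norm(u_hat - v[None], axis=2)
        c = c + alpha * epanechnikov(dist)                   # coupling-coefficient update
        nas = np.log(np.sum(c * epanechnikov(dist)) + 1e-9)  # negative agreement score
        if last_nas is not None and abs(nas - last_nas) < eps:
            break                                            # adaptive, per-example stopping
        last_nas = nas
    activations = np.linalg.norm(v, axis=1)                  # a_j = ||v_j||
    return v, activations
```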
Partial Routing  As opposed to the entire output space of a dataset, the sub-output space corresponding to an individual example is rather small; for instance, only a few labels are assigned to one document in text classification. As a consequence, it is redundant to route low-level capsules to the entire output space for each example in the training stage, which motivated us to present a partial routing algorithm with constrained output spaces, such that our NAS function is described as:

min_{c,v} − Σ_i ( Σ_{j∈D+} c_{ij} ⟨v_j, û_{j|i}⟩ + λ · Σ_{k∈D−} c_{ik} ⟨v_k, û_{k|i}⟩ )

where D+ and D− denote the sets of real (positive) and randomly selected (negative) outputs for each example, respectively. Both sets are far smaller than the entire output space. λ is the hyper-parameter that controls the aggregation scores from negative outputs.

3 Experiments

The major focus of this work is to investigate the scalability of our approach on datasets with a large output space, and its generalizability in low-resource settings with few training examples. Therefore, we validate our capsule-based approach on two specific NLP tasks: (i) multi-label text classification with a large label scale; (ii) question answering with a data imbalance issue.

3.1 Multi-label Text Classification

The multi-label text classification task refers to assigning multiple relevant labels to each input document, while the entire label set might be extremely large. We use our approach to encode an input document and generate the final capsules corresponding to the number of labels in the representation layer. The length of the final capsule for each label indicates the probability of the document having this label.

Dataset   #Train/Test/Labels   Avg-docs
RCV1      23.1K/781.2K/103     729.67
EUR-Lex   15.4K/3.8K/3.9K      15.59
Table 1: Characteristics of the datasets. Each label of RCV1 has about 729.67 training examples, while each label of EUR-Lex has merely about 15.59 examples.

Experimental Setup  We conduct our experiments on two datasets selected from the extreme classification repository:2 a regular label scale dataset (RCV1), with 103 labels (Lewis et al., 2004), and a large label scale dataset (EUR-Lex), with 3,956 labels (Mencia and Fürnkranz, 2008), described in Table 1. The intuition behind our dataset selection is that EUR-Lex, with 3,956 labels and 15.59 examples per label, fits well with our goal of investigating the scalability and generalizability of our approach. We contrast EUR-Lex with RCV1, a dataset with a regular label scale, and leave the study of datasets with extremely large label sets, e.g., Wikipedia-500K with 501,069 labels, to future work.

2 https://manikvarma.github.io

Baselines  We compare our approach to the following baselines: non-deep-learning approaches using TF-IDF features of documents as inputs: FastXML (Prabhu and Varma, 2014) and PD-Sparse (Yen et al., 2016); deep learning approaches using the raw text of documents as inputs: FastText (Joulin et al., 2016), Bow-CNN (Johnson and Zhang, 2014), CNN-Kim (Kim, 2014) and XML-CNN (Liu et al., 2017); and a capsule-based approach, Cap-Zhao (Zhao et al., 2018). For evaluation, we use standard rank-based measures (Liu et al., 2017) such as Precision@k and Normalized Discounted Cumulative Gain (NDCG@k).

Implementation Details  The word embeddings are initialized as 300-dimensional GloVe vectors (Pennington et al., 2014). In the convolutional layer, we use a convolution operation with three different window sizes (2, 4, 8) to extract features from input documents. Each feature is transformed into a primary capsule with 16 dimensions by a group-convolution operation. All capsules in the primary capsule layer are condensed into 256 capsules for RCV1 and 128 capsules for EUR-Lex by a capsule compression operation.
To avoid routing low-level capsules to the entire label space in the inference stage, we use a CNN baseline (Kim, 2014), trained on the same dataset as our approach, to generate 200 candidate labels and take these labels as a constrained output space for each example.

Experimental Results  In Table 2, we can see a noticeable margin brought by our capsule-based approach over the strong baselines on EUR-Lex, and competitive results on RCV1. These results appear to indicate that our approach has superior generalization ability on datasets with fewer training examples, i.e., RCV1 has 729.67 examples per label while EUR-Lex has 15.59 examples. In contrast to the strongest baseline XML-CNN, with 22.52M parameters and 0.08 seconds per batch, our approach has 14.06M parameters, and takes 0.25 seconds per batch in an acceleration setting with capsule compression and partial routing, and 1.7 seconds without acceleration. This demonstrates that our approach provides competitive computational speed with fewer parameters compared to the competitors.

Discussion on Generalization  To further study the generalization capability of our approach, we vary the percentage of training examples from 100% to 50% of the entire training set, leading to the number of training examples per label decreasing from 15.59 to 7.77. Figure 5 shows that our approach outperforms the strongest baseline XML-CNN for the different fractions of the training examples. This finding agrees with our speculation on generalization: the distance between our approach and XML-CNN increases as fewer training data samples are available.

Datasets Metrics FastXML PD-Sparse FastText Bow-CNN CNN-Kim XML-CNN Cap-Zhao NLP-Cap Impv
RCV1 PREC@1 94.62 95.16 95.40 96.40 93.54 96.86 96.63 97.05 +0.20%
RCV1 PREC@3 78.40 79.46 79.96 81.17 76.15 81.11 81.02 81.27 +0.20%
RCV1 PREC@5 54.82 55.61 55.64 56.74 52.94 56.07 56.12 56.33 -0.72%
RCV1 NDCG@1 94.62 95.16 95.40 96.40 93.54 96.88 96.63 97.05 +0.20%
RCV1 NDCG@3 89.21 90.29 90.95 92.04 87.26 92.22 92.31 92.47 +0.17%
RCV1 NDCG@5 90.27 91.29 91.68 92.89 88.20 92.63 92.75 93.11 +0.52%
EUR-Lex PREC@1 68.12 72.10 71.51 64.99 68.35 75.65 80.20 +6.01%
EUR-Lex PREC@3 57.93 57.74 60.37 51.68 54.45 61.81 65.48 +5.93%
EUR-Lex PREC@5 48.97 47.48 50.41 42.32 44.07 50.90 52.83 +3.79%
EUR-Lex NDCG@1 68.12 72.10 71.51 64.99 68.35 75.65 80.20 +6.01%
EUR-Lex NDCG@3 60.66 61.33 63.32 55.03 59.81 66.71 71.11 +6.59%
EUR-Lex NDCG@5 56.42 55.93 58.56 49.92 57.99 64.45 68.80 +6.75%
Table 2: Comparisons of our NLP-Cap approach and baselines on two text classification benchmarks, where '-' denotes methods that failed to scale due to memory issues.

Figure 5: Performance on EUR-Lex by varying the percentage of training examples (x axis).

Method #Training PREC@1 PREC@3 PREC@5
XML-CNN 100% examples 75.65 61.81 50.90
NLP-Capsule 50% examples 73.69 56.62 44.36
NLP-Capsule 60% examples 74.83 58.48 46.33
NLP-Capsule 70% examples 77.26 60.90 47.73
NLP-Capsule 80% examples 77.68 61.06 48.28
NLP-Capsule 90% examples 79.45 63.95 50.90
NLP-Capsule 100% examples 80.20 65.48 52.83
Method #Training NDCG@1 NDCG@3 NDCG@5
XML-CNN 100% examples 75.65 66.71 64.45
NLP-Capsule 50% examples 73.69 66.65 67.36
NLP-Capsule 60% examples 74.83 67.87 68.62
NLP-Capsule 70% examples 77.26 69.79 69.65
NLP-Capsule 80% examples 77.67 69.43 69.27
NLP-Capsule 90% examples 79.45 71.64 71.06
NLP-Capsule 100% examples 80.21 71.11 68.80
Table 3: Experimental results for different fractions of training examples, from 50% to 100%, on EUR-Lex.
In Table 3, we also find that our approach with 70% of the training examples achieves about 5% improvement over XML-CNN with 100% of the examples on 4 out of 6 metrics.

Routing Comparison  We compare our routing with those of Sabour et al. (2017) and Zhang et al. (2018b) on the EUR-Lex dataset and observe that it performs best on all metrics (Table 4). We speculate that the improvement comes from the enhanced reliability of the routing processes at instance level.

Method PREC@1 PREC@3 PREC@5
XML-CNN 75.65 61.81 50.90
NLP-Capsule + Sabour's Routing 79.14 64.33 51.85
NLP-Capsule + Zhang's Routing 80.20 65.48 52.83
NLP-Capsule + Our Routing 80.62 65.61 53.66
Method NDCG@1 NDCG@3 NDCG@5
XML-CNN 75.65 66.71 64.45
NLP-Capsule + Sabour's Routing 79.14 70.13 67.02
NLP-Capsule + Zhang's Routing 80.20 71.11 68.80
NLP-Capsule + Our Routing 80.62 71.34 69.57
Table 4: Performance on the EUR-Lex dataset with different routing processes.

3.2 Question Answering

The question-answering (QA) selection task refers to selecting the best answer from candidates for each question. For a question-answer pair (q, a), we use our capsule-based approach to generate two final capsules v_q and v_a corresponding to the respective question and answer. The relevance score of the question-answer pair can be defined as their cosine similarity:

s(q, a) = cos(v_q, v_a) = v_q^⊤ v_a / (||v_q|| · ||v_a||)

Experiment Setup  In Table 5, we conduct our experiments on the TREC QA dataset collected from TREC QA track 8-13 data (Wang et al., 2007). The intuition behind this dataset selection is that the cost of hiring human annotators to collect positive answers for individual questions can be prohibitive, since positive answers can be conveyed in multiple different surface forms. Such an issue arises particularly in TREC QA, with only 12% positive answers.
This finding also agrees with the observation Method MAP MRR CNN + LR (unigram) 54.70 63.29 CNN + LR (bigram) 56.93 66.13 CNN 66.91 68.80 CNTN 65.80 69.78 LSTM (1 layer) 62.04 66.85 LSTM 59.75 65.33 MV-LSTM 64.88 68.24 NTN-LSTM 63.40 67.72 HD-LSTM 67.44 75.11 Capsule-Zhao 73.63 70.12 NLP-Capsule 77.73 74.16 Table 6: Experimental results on TREC QA dataset. we found in multi-label classification: our approach has superior generalization capability in low-resource setting with few training examples. In contrast to the strongest baseline HD-LSTM with 34.51M and 0.03 seconds for one batch, our approach has 17.84M parameters and takes 0.06 seconds in an acceleration setting, and 0.12 seconds without acceleration. 4 Related Work 4.1 Multi-label Text Classification Multi-label text classification aims at assigning a document to a subset of labels whose label set might be extremely large. With increasing numbers of labels, issues of data sparsity and scalability arise. Several methods have been proposed for the large multi-label classification case. Tree-based models (Agrawal et al., 2013; Weston et al., 2013) induce a tree structure that recursively partitions the feature space with nonleaf nodes. Then, the restricted label spaces at leaf nodes are used for classification. Such a solution entails higher robustness because of a dynamic hyper-plane design and its computational efficiency. FastXML (Prabhu and Varma, 2014) is one such tree-based model, which learns a hierarchy of training instances and optimizes an NDCG-based objective function for nodes in the tree structure. Label embedding models (Balasubramanian and Lebanon, 2012; Chen and Lin, 2012; Cisse et al., 2013; Bi and Kwok, 2013; Ferng and Lin, 2011; Hsu et al., 2009; Ji et al., 2008; Kapoor et al., 2012; Lewis et al., 2004; Yu et al., 2014a) address the data sparsity issue with two steps: compression and decompression. The compression step learns a low-dimensional label embedding that is projected from original and highdimensional label space. When data instances are classified to these label embeddings, they will be projected back to the high-dimensional label space, which is the decompression step. Recent works came up with different compression or decompression techniques, e.g., SLEEC (Bhatia et al., 2015). Deep learning models: FastText (Joulin et al., 2016) uses averaged word embeddings to classify documents, which is computationally efficient but ignores word order. Various CNNs inspired by Kim (2014) explored MTC with dynamic pooling, such as Bow-CNN (Johnson and 1556 Zhang, 2014) and XML-CNN (Liu et al., 2017). Linear classifiers: PD-Sparse (Yen et al., 2016) introduces a Fully-Corrective Block-Coordinate Frank-Wolfe algorithm to address data sparsity. 4.2 Question and Answering State-of-the-art approaches to QA fall into two categories: IR-based and knowledge-based QA. IR-based QA firstly preprocesses the question and employ information retrieval techniques to retrieve a list of relevant passages to questions. Next, reading comprehension techniques are adopted to extract answers within the span of retrieved text. For answer extraction, early methods manually designed patterns to get them (Pasca). A recent popular trend is neural answer extraction. Various neural network models are employed to represent questions (Severyn and Moschitti, 2015; Wang and Nyberg, 2015). 
Since the attention mechanism naturally explores relevancy, it has been widely used in QA models to relate the question to candidate answers (Tan et al., 2016; Santos et al., 2016; Sha et al., 2018). Moreover, some researchers leveraged external large-scale knowledge bases to assist answer selection (Savenkov and Agichtein, 2017; Shen et al., 2018; Deng et al., 2018). Knowledge-based QA conducts semantic parsing on questions and transforms parsing results into logical forms. Those forms are adopted to match answers from structured knowledge bases (Yao and Van Durme, 2014; Yih et al., 2015; Bordes et al., 2015; Yin et al., 2016; Hao et al., 2017). Recent developments focused on modeling the interaction between question and answer pairs: Tensor layers (Qiu and Huang, 2015; Wan et al., 2016) and holographic composition (Tay et al., 2017) have pushed the state-of-the-art. 4.3 Capsule Networks Capsule networks were initially proposed by Hinton (Hinton et al., 2011) to improve representations learned by neural networks against vanilla CNNs. Subsequently, Sabour et al. (2017) replaced the scalar-output feature detectors of CNNs with vector-output capsules and max-pooling with routing-by-agreement. Hinton et al. (2018) then proposed a new iterative routing procedure between capsule layers based on the EM algorithm, which achieves better accuracy on the smallNORB dataset. Zhang et al. (2018a) applied capsule networks to relation extraction in a multi-instance multi-label learning framework. Xiao et al. (2018) explored capsule networks for multi-task learning. Xia et al. (2018) studied the zero-shot intent detection problem with capsule networks, which aims to detect emerging user intents in an unsupervised manner. Zhao et al. (2018) investigated capsule networks with dynamic routing for text classification, and transferred knowledge from the single-label to multi-label cases. Cho et al. (2019) studied capsule networks with determinantal point processes for extractive multi-document summarization. Our work is different from our predecessors in the following aspects: (i) we evaluate the performance of routing processes at instance level, and introduce an adaptive optimizer to enhance the reliability of routing processes; (ii) we present capsule compression and partial routing to achieve better scalability of capsule networks on datasets with a large output space. 5 Conclusion Making computers perform more like humans is a major issue in NLP and machine learning. This not only includes making them perform on similar levels (Hassan et al., 2018), but also requests them to be robust to adversarial examples (Eger et al., 2019) and generalize from few data points (R¨uckl´e et al., 2019). In this work, we have addressed the latter issue. In particular, we extended existing capsule networks into a new framework with advantages concerning scalability, reliability and generalizability. Our experimental results have demonstrated its effectiveness on two NLP tasks: multi-label text classification and question answering. Through our modifications and enhancements, we hope to have made capsule networks more suitable to large-scale problems and, hence, more mature for real-world applications. In the future, we plan to apply capsule networks to even more challenging NLP problems such as language modeling and text generation. 6 Acknowledgments We thank the anonymous reviewers for their comments, which greatly improved the final version of the paper. 
This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of In1557 formation from Heterogeneous Sources (AIPHES) at the Technische Universit¨at Darmstadt under grant No. GRK 1994/1. References Rahul Agrawal, Archit Gupta, Yashoteja Prabhu, and Manik Varma. 2013. Multi-label learning with millions of labels: Recommending advertiser bid phrases for web pages. In Proceedings of the 22nd international conference on World Wide Web, pages 13–24. ACM. Krishnakumar Balasubramanian and Guy Lebanon. 2012. The landmark selection method for multiple output prediction. arXiv preprint arXiv:1206.6479. Kush Bhatia, Himanshu Jain, Purushottam Kar, Manik Varma, and Prateek Jain. 2015. Sparse local embeddings for extreme multi-label classification. In Advances in Neural Information Processing Systems, pages 730–738. Wei Bi and James Kwok. 2013. Efficient multi-label classification with many labels. In International Conference on Machine Learning, pages 405–413. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075. Yao-Nan Chen and Hsuan-Tien Lin. 2012. Featureaware label space dimension reduction for multilabel classification. In Advances in Neural Information Processing Systems, pages 1529–1537. Sangwoo Cho, Logan Lebanoff, Hassan Foroosh, and Fei Liu. 2019. Improving the similarity measure of determinantal point processes for extractive multidocument summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Moustapha M Cisse, Nicolas Usunier, Thierry Artieres, and Patrick Gallinari. 2013. Robust bloom filters for large multilabel classification tasks. In Advances in Neural Information Processing Systems, pages 1851–1859. Dorin Comaniciu and Peter Meer. 2002. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on pattern analysis and machine intelligence, 24(5):603–619. Yang Deng, Ying Shen, Min Yang, Yaliang Li, Nan Du, Wei Fan, and Kai Lei. 2018. Knowledge as a bridge: Improving cross-domain answer selection with external knowledge. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3295–3305. Steffen Eger, G¨ozde G¨ul S¸ahin, Andreas R¨uckl´e, JiUng Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, and Iryna Gurevych. 2019. Text processing like humans do: Visually attacking and shielding nlp systems. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. C-S Ferng and H-T Lin. 2011. Multi-label classification with error-correcting codes. In Asian Conference on Machine Learning, pages 281–295. Yanchao Hao, Yuanzhe Zhang, Kang Liu, Shizhu He, Zhanyi Liu, Hua Wu, and Jun Zhao. 2017. An endto-end model for question answering over knowledge base with cross-attention combining global knowledge. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 221–231. Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming Zhou. 2018. 
Achieving human parity on automatic chinese to english news translation. CoRR, abs/1803.05567. Geoffrey E Hinton, Alex Krizhevsky, and Sida D Wang. 2011. Transforming auto-encoders. In International Conference on Artificial Neural Networks, pages 44–51. Springer. Geoffrey E Hinton, Sara Sabour, and Nicholas Frosst. 2018. Matrix capsules with em routing. Daniel J Hsu, Sham M Kakade, John Langford, and Tong Zhang. 2009. Multi-label prediction via compressed sensing. In Advances in neural information processing systems, pages 772–780. Shuiwang Ji, Lei Tang, Shipeng Yu, and Jieping Ye. 2008. Extracting shared subspace for multi-label classification. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 381–389. ACM. Rie Johnson and Tong Zhang. 2014. Effective use of word order for text categorization with convolutional neural networks. arXiv preprint arXiv:1412.1058. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759. Ashish Kapoor, Raajay Viswanathan, and Prateek Jain. 2012. Multilabel classification using bayesian compressed sensing. In Advances in Neural Information Processing Systems, pages 2645–2653. Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. 1558 B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. 2015. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338. David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. Journal of machine learning research, 5(Apr):361–397. Jingzhou Liu, Wei-Cheng Chang, Yuexin Wu, and Yiming Yang. 2017. Deep learning for extreme multi-label text classification. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 115–124. ACM. Eneldo Loza Mencia and Johannes F¨urnkranz. 2008. Efficient pairwise multilabel classification for largescale problems in the legal domain. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 50–65. Springer. Marius Pasca. Open-Domain Question Answering from Large Text Collections, volume 29. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Yashoteja Prabhu and Manik Varma. 2014. Fastxml: A fast, accurate and stable tree-classifier for extreme multi-label learning. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 263–272. ACM. Xipeng Qiu and Xuanjing Huang. 2015. Convolutional neural tensor network architecture for communitybased question answering. In Twenty-Fourth International Joint Conference on Artificial Intelligence. Andreas R¨uckl´e, Nafise Sadat Moosavi, and Iryna Gurevych. 2019. Coala: A neural coverage-based approach for long answer selection with small data. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19). Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. 2017. Dynamic routing between capsules. In Advances in Neural Information Processing Systems, pages 3856–3866. Cicero dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016. Attentive pooling networks. arXiv preprint arXiv:1602.03609. 
Denis Savenkov and Eugene Agichtein. 2017. Evinets: Neural networks for combining evidence signals for factoid question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 299–304. Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval, pages 373– 382. ACM. Lei Sha, Xiaodong Zhang, Feng Qian, Baobao Chang, and Zhifang Sui. 2018. A multi-view fusion neural network for answer selection. In Thirty-Second AAAI Conference on Artificial Intelligence. Ying Shen, Yang Deng, Min Yang, Yaliang Li, Nan Du, Wei Fan, and Kai Lei. 2018. Knowledge-aware attentive neural network for ranking question answer pairs. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 901–904. ACM. Ming Tan, Cicero Dos Santos, Bing Xiang, and Bowen Zhou. 2016. Improved representation learning for question answer matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 464–473. Yi Tay, Minh C Phan, Luu Anh Tuan, and Siu Cheung Hui. 2017. Learning to rank question answer pairs with holographic dual lstm architecture. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 695–704. ACM. Shengxian Wan, Yanyan Lan, Jiafeng Guo, Jun Xu, Liang Pang, and Xueqi Cheng. 2016. A deep architecture for semantic matching with multiple positional sentence representations. In Thirtieth AAAI Conference on Artificial Intelligence. Matt P Wand and M Chris Jones. 1994. Kernel smoothing. Chapman and Hall/CRC. Di Wang and Eric Nyberg. 2015. A long short-term memory model for answer sentence selection in question answering. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 707–712. Mengqiu Wang, Noah A Smith, and Teruko Mitamura. 2007. What is the jeopardy model? a quasisynchronous grammar for qa. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Mingxuan Wang, Jun Xie, Zhixing Tan, Jinsong Su, et al. 2018. Towards linear time neural machine translation with capsule networks. arXiv preprint arXiv:1811.00287. Jason Weston, Ameesh Makadia, and Hector Yee. 2013. Label partitioning for sublinear ranking. In International Conference on Machine Learning, pages 181–189. 1559 Congying Xia, Chenwei Zhang, Xiaohui Yan, Yi Chang, and Philip S Yu. 2018. Zero-shot user intent detection via capsule neural networks. arXiv preprint arXiv:1809.00385. Liqiang Xiao, Honglun Zhang, Wenqing Chen, Yongkun Wang, and Yaohui Jin. 2018. Mcapsnet: Capsule network for text with multi-task learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4565–4574. Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with freebase. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 956–966. Ian En-Hsu Yen, Xiangru Huang, Pradeep Ravikumar, Kai Zhong, and Inderjit Dhillon. 2016. 
Pdsparse: A primal and dual sparse approach to extreme multiclass and multilabel classification. In International Conference on Machine Learning, pages 3069–3077. Scott Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. Wenpeng Yin, Mo Yu, Bing Xiang, Bowen Zhou, and Hinrich Sch¨utze. 2016. Simple question answering by attentive convolutional neural network. arXiv preprint arXiv:1606.03391. Hsiang-Fu Yu, Prateek Jain, Purushottam Kar, and Inderjit Dhillon. 2014a. Large-scale multi-label learning with missing labels. In International conference on machine learning, pages 593–601. Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014b. Deep learning for answer sentence selection. arXiv preprint arXiv:1412.1632. Ningyu Zhang, Shumin Deng, Zhanling Sun, Xi Chen, Wei Zhang, and Huajun Chen. 2018a. Attentionbased capsule network with dynamic routing for relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 986–992. Suofei Zhang, Wei Zhao, Xiaofu Wu, and Quan Zhou. 2018b. Fast dynamic routing based on weighted kernel density estimation. arXiv preprint arXiv:1805.10807. Wei Zhao, Jianbo Ye, Min Yang, Zeyang Lei, Suofei Zhang, and Zhou Zhao. 2018. Investigating capsule networks with dynamic routing for text classification. In Proceedings of the 2018 conference on empirical methods in natural language processing (EMNLP), pages 3110–3119.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1560–1568 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1560 Soft Representation Learning for Sparse Transfer Haeju Park1 Jinyoung Yeo2 Gengyu Wang1 Seung-won Hwang1 Department of Computer Science, Yonsei University, Seoul, Korea1 T-Brain, AI Center, SK Telecom, Seoul, Korea2 {phj0225, posuer, seungwonh}@yonsei.ac.kr [email protected] Abstract Transfer learning is effective for improving the performance of tasks that are related, and Multi-task learning (MTL) and Cross-lingual learning (CLL) are important instances. This paper argues that hard-parameter sharing, of hard-coding layers shared across different tasks or languages, cannot generalize well, when sharing with a loosely related task. Such case, which we call sparse transfer, might actually hurt performance, a phenomenon known as negative transfer. Our contribution is using adversarial training across tasks, to “softcode” shared and private spaces, to avoid the shared space gets too sparse. In CLL, our proposed architecture considers another challenge of dealing with low-quality input. 1 Introduction Transfer learning in neural networks has been applied in recent years to improving the performance of related tasks, for example, 1) multi-task learning (MTL) with different tasks (labeled data available all tasks) and 2) cross-lingual learning (CLL) with different language (but the same task) though labeled data available only in source language. For both settings, one of their most common strategies is hard-parameter sharing, as shown in Figure 1a, which shares the hidden layers across tasks, which we will call shared layer. This approach works well when tasks are closely related, when most features are invariant across tasks. Otherwise, which we call sparse transfer, transferring between loosely related tasks often hurt performance, known as negative transfer. We elaborate this problem in MTL and CLL scenarios. First, for MTL, the shared space is reported to be sparse, in an architecture with one shared encoder (Sachan and Neubig, 2018), when shared by K (e.g., K > 2 tasks) loosely-related tasks. To address this problem, as shown in Figure 1b, recent models (Liu et al., 2017; Lin et al., 2018) divide the features of different tasks into task-invariant and task-dependent latent spaces, which we will call shared and private spaces from this point on. However, since such approach still hard-codes shared and private features, deciding which subsets of tasks should share encoders in many-task settings, among all possible combinations of tasks, is a non-trivial design problem (Ruder et al., 2019; Sanh et al., 2019). Second, for CLL, the given task in source language (with rich resources) transfers to that for target languages without training resources. For the latter, machine-translated resources are fed instead, to the shared encoder (Schwenk and Douze, 2017; Conneau et al., 2018). When translation is perfect, the shared space would be dense: For example, English training pair with entailment relationship, “Because it looked so formidable” and “It really did look wonderful” can be translated to Chinese sentences of the same meaning, to preserve labels. Meanwhile, its translation into “因为 它看起来那么可怕” (Because it looks so scary) and “它真的看起来很棒” (It really looks great), fails to preserve the entailment relationship, and makes the shared space sparse. 
As a unified solution for both problems, we propose soft-coding approaches that can adapt in the following novel ways. First, for MTL, we propose Task-Adaptive Representation learning using Soft-coding, namely TARS, wherein shared and private features are both mixtures of features. Specifically, as shown in Figure 1c, TARS begins as a generic sharing framework using one common shared encoder, but also adopts its paired task-specific layers to feed a Mixture-of-Experts (MoE) module (Shazeer et al., 2017; Guo et al., 2018) which captures soft-private features with a weighted combination of all task-dependent features, where 1561 ݔ௞ ݔ௠ ܮ௧௔௦௞ ௞ ܮ௧௔௦௞ ௠ Shared softmax softmax (a) Fully-Shared Model (FS) ݔ௞ ݔ௠ ܮ௧௔௦௞ ௞ ܮ௧௔௦௞ ௠ ൅ ൅ Private Shared Private softmax softmax (b) Shared-Private Model (SP) ݔ௞ ݔ௠ ܮ௧௔௦௞ ௞ ܮ௧௔௦௞ ௠ ൅ ൅ G Private Private Shared ȭ softmax softmax (c) TARS ݔ௞ ݔ௠ ܮ௧௔௦௞ ௞ ܮ௧௔௦௞ ௠ Shared softmax softmax P P Refiner (d) CASE Figure 1: Illustration of transfer learning architectures. Yellow and green boxes represent shared and private LSTM layers. G and P indicates a gating network and a policy network respectively. a gating network G in Figure 1c, decides on output weights for each task. Based on this basic architecture, TARS softly-shares features balanced by two conflicting auxiliary losses: one is used to eliminate private features from the shared space, which decreases the generalization across task, while the other is used to keep shared space “dense” with soft-private features, which is a form of adversarial training. Such balancing efforts prevent the shared space from being too sparse to be generalized for every task, even when K > 2. Second, for CLL, we propose a Cross-lingual AdverSarial Example, namely CASE. Compared to Figure 1c, task-specific private layers no longer exist in Figure 1d, because CLL deals with a single task for multiple languages. Instead, for an additional challenge of refining low-quality input, we add Refiner. Specifically, once the source language is translated into the target language, CASE moves the noisy representation on the target side towards a direction of space on the source side back in a form of adversarial example, and uses this as an additional training data to task classifier. However, this refinement may have adverse effects (Yeo et al., 2018), for which a policy network P in Figure 1d decides whether to refine or not. To demonstrate the effectiveness and flexibility of our soft-coding approaches, we evaluate TARS on five different datasets covering diverse scenarios and CASE on cross-lingual natural language inference (XNLI) datasets with 15 languages (including low-resource language such as Swahili and Urdu), and show that TARS and CASE outperform existing hard-coding approaches. 2 Preliminaries 2.1 Problem Statement Formally, we assume the existence of K datasets {Dk}K k=1, where each Dk contains |Dk| data samples for classification task k. Specifically, Dk = {(xk i , yk i )}|Dk| i=1 (1) where xk i and yk i denote a sentence (or pair) and its corresponding label for task k. In CLL, Dk is given only for one language, for which we create a new dataset ˜D k = {(˜xk i , yk i )}, where ˜xk i is translated, using neural machine translation (NMT), for training task k (for another language). Transfer learning aims to improve classification by learning these K tasks in parallel. Thus, our objective is to learn a sentence (or pair) representation xk per task k, but take into account the correlation among related tasks. 
Specifically, given an input sequence $x^k = \{w^k_1, w^k_2, ..., w^k_T\}$ with length $T$, we aim to learn a sentence representation $x^k$ for the entire sequence as follows: $x^k = \text{Encoder}(\{w^k_1, w^k_2, ..., w^k_T\})$. Following (Conneau et al., 2017), the final output representation $x^k$ is ultimately fed into a corresponding classifier, which consists of multiple fully connected layers culminating in a softmax layer, i.e., $\hat{y}^k = \text{softmax}(W^k x^k + b^k)$. The parameters of the network are trained to minimize the loss $L_{task}$ between the predicted and true distributions on all the tasks:

$$L_{task} = \sum_{k=1}^{K} L(\hat{y}^k, y^k) \quad (2)$$

where $L(\hat{y}^k, y^k)$ denotes the cross-entropy loss for task $k$.

2.2 Baseline: Hard-code Approach

As overviewed in Section 1, the success of transfer learning depends on the sharing scheme in the latent feature space. Existing architectures differ in how they group the shared features to maximize sharing, as illustrated in Figure 1. We group the existing approaches into the following two categories.

Base I: Fully-Shared Model (FS) As shown in Figure 1a, the Fully-Shared (FS) model adopts a single shared encoder S-Encoder to extract features generalized for all the tasks. For example, given two tasks $k$ and $m$, all features $s^k$ of task $k$ are expected to be shared by task $m$ and vice versa, i.e., $s^k = \text{S-Encoder}(\{w^k_1, w^k_2, ..., w^k_T\}; \theta_s)$, where $\theta_s$ represents the parameters of the shared encoder. In the FS model, $s^k$ is equivalent to the $x^k$ fed into the classifiers.

Base II: Shared-Private Model (SP) As Figure 1b shows, the Shared-Private (SP) model consists of two modules: (1) the underlying shared encoder S-Encoder, responsible for capturing task-invariant features, and (2) the private encoder P-Encoder, which extracts task-dependent features, i.e., $p^k = \text{P-Encoder}(\{w^k_1, w^k_2, ..., w^k_T\}; \theta^k_p)$, where $\theta^k_p$ represents the parameters of each private encoder. Both the shared representation $s^k$ and the private representation $p^k$ are then concatenated to construct the final sentence representation: $x^k = s^k \oplus p^k$.

These hard-code approaches greatly reduce the risk of overfitting when modeling all of the tasks simultaneously, but have the caveat that the ability of the shared space to model task-invariant features can be significantly reduced (Sachan and Neubig, 2018). We empirically confirm this observation in Section 5.2.

3 Soft-code Approach for MTL: TARS

Inspired by the limitations of hard-coding approaches, our proposed model, TARS, begins with the FS model but progressively adapts to task characteristics, as shown in Figure 1c.

Soft-Private Module TARS first models the multiple tasks as an MoE, where each task has an individual expert network, and weighs the experts for different task examples. To be specific, TARS feeds the shared features $s^k$ into an individual P-Encoder for each task to encode task-dependent features as follows:

$$p^k = \text{P-Encoder}(s^k; \theta^k_p) \quad (3)$$

Simultaneously, a gating network decides on output weights for each expert (i.e., each individual P-Encoder). Specifically, the gating network $G$, parameterized by $\theta_g$, is used to map the shared representation of the current task to the correct expert, so that each expert learns task-dependent features for that task, estimating the task label of $s^k$:

$$G(s^k; \theta_g) = \text{softmax}(W_g s^k + b_g) \quad (4)$$

where $W_g$ and $b_g$ are a trainable weight matrix and bias, respectively.
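To make the expert weighting concrete, the following is a minimal PyTorch-style sketch of the gating network in Eq. (4) together with per-task private encoders; it also forms the gate-weighted mixture defined in Eq. (5) just below. The linear encoders, dimensions, and variable names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftPrivateModule(nn.Module):
    """Sketch of TARS' soft-private module: per-task expert encoders plus a
    gating network over the shared representation (sizes are assumptions)."""

    def __init__(self, d_shared=512, d_private=512, num_tasks=3):
        super().__init__()
        # One private "expert" encoder per task (a single linear layer here;
        # the paper uses recurrent encoders, so this is illustrative only).
        self.experts = nn.ModuleList(
            [nn.Linear(d_shared, d_private) for _ in range(num_tasks)]
        )
        # Gating network G(s^k) = softmax(W_g s^k + b_g), as in Eq. (4).
        self.gate = nn.Linear(d_shared, num_tasks)

    def forward(self, s_k):
        # s_k: (batch, d_shared) shared representation of the current task.
        gate_weights = F.softmax(self.gate(s_k), dim=-1)            # (batch, K)
        expert_outs = torch.stack(
            [expert(s_k) for expert in self.experts], dim=1          # (batch, K, d_private)
        )
        # Soft-private representation: gate-weighted mixture of expert outputs.
        p_sk = (gate_weights.unsqueeze(-1) * expert_outs).sum(dim=1)
        return p_sk, gate_weights

module = SoftPrivateModule()
p_sk, weights = module(torch.randn(8, 512))   # 8 example sentence representations
```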
Based on the above, the final soft-private representation $p(s^k)$ is a mixture of all expert outputs with respect to $s^k$:

$$p(s^k) = \sum_{k=1}^{K} G(s^k; \theta_g) \cdot p^k \quad (5)$$

Soft-Shared Module In order to learn task-invariant features, inspired by (Liu et al., 2017), TARS adopts an adversarial network that contains a feature extractor and a task discriminator D. The basic idea is to learn features that cannot be distinguished by D: D aims to discriminate which task a feature comes from, while the feature extractor (e.g., S-Encoder) tries to fool D so that it cannot identify the task of the feature, making the feature task-invariant. More formally,

$$L_{adv} = \min_{\theta_s} \lambda \max_{\theta_d} \sum_{k=1}^{K} \sum_{i=1}^{|D_k|} d^k_i \log[D(s^k; \theta_d)] \quad (6)$$

where $d^k_i$ is the ground-truth task label, $\theta_d$ is the parameter of the task discriminator D, and $\lambda$ is a hyperparameter. As mentioned before, such adversarial learning has been verified to be very effective for extracting task-invariant features. However, keeping the shared space too pure inevitably leads to sparseness, for which we additionally introduce the density constraint $L_{dense}$. Specifically, the objective of the density constraint $L_{dense}$ is to push the soft-private features from the private embeddings closer to the shared ones, such that the shared space is encouraged to be dense rather than too sparse, resolving the sparseness of the shared space. The soft-shared features are therefore more informative in this case. Formally,

$$L_{dense} = \sum_{k=1}^{K} ||p(s^k) - s^k||_2 \quad (7)$$

where $||\cdot||_2$ is the mean squared L-2 norm.

Training and Inference Lastly, the soft-private and soft-shared representations $p(s^k)$ and $s^k$ are concatenated, i.e., $x^k = s^k \oplus p(s^k)$, to feed all the networks in TARS with the following loss:

$$L_{TARS} = L_{task} + L_{adv} + L_{dense} \quad (8)$$

TARS is trained with backpropagation, and adopts a gradient reversal layer (Ganin and Lempitsky, 2015) to address the minimax optimization problem. Note that, unlike hard-code approaches, zero-shot learning is also possible, since TARS can adapt to a new target task (e.g., cross-domain or cross-lingual) by aligning it with the trained expert gate, which decides what combination of experts to use in Eq. (4) and Eq. (5) at inference time.

4 Soft-code Approach for CLL: CASE

This section revises $L_{dense}$ in Eq. (7) for the CLL scenario. Note that, in CLL, the sparse space corresponds to mistranslated low-resource language, which we call pseudo-sentences. The goal of $L_{dense}$ is thus replaced by softly correcting the representation to align better ($L_{align}$) while preserving the semantics ($L_{sim}$). For that purpose, we propose a Refiner that replaces $L_{dense}$ with these two new losses.

Refinement by Perturbation We first discuss how to refine pseudo-sentences by a perturbation $\Delta$ for higher learning effectiveness. Related ideas ensure the robustness of a model by finding a $\Delta$ that changes a prediction, i.e., $f(x) = y$ while $f(x+\Delta) \neq y$ (Goodfellow et al., 2015). Inspired by this, CASE explores whether incorrect translations that may cause wrong predictions in the target language can be moved back to change predictions. To this end, based on the basic architecture of the variational auto-encoder (VAE) (Kingma and Welling, 2013), CASE models a neural refiner to refine low-quality representations. Specifically, as shown in Figure 1d, CASE first encodes pseudo-parallel sentences into the shared space, e.g., (x, ˜x).
Then, the refiner, which consists of two encoding feed-forward networks $\mu(\cdot)$ and $\sigma(\cdot)$, converts the representations into two distribution variables $\mu(\tilde{x})$ and $\sigma(\tilde{x})$, the mean and standard deviation for the pseudo representation. Unlike a traditional VAE, which minimizes a latent loss measuring how closely the latent variables match a unit Gaussian, i.e., $KL(N(\mu(x), \sigma(x)), N(0, 1))$, CASE enhances the latent loss with the pseudo-parallel representation, to generate a pseudo-adversarial example $\tilde{z}$ that roughly follows a representation $x$ from the resource-rich space:

$$L_{align} = KL(N(\mu(\tilde{x}), \sigma(\tilde{x})), N(\mu(x), \sigma(x))) \quad (9)$$

In order to optimize the KL divergence, CASE applies a simple reparameterization trick (Kingma and Welling, 2013). Using this trick, the pseudo-adversarial example $\tilde{z}$ is generated from the mean and standard deviation vectors, i.e., $\tilde{z} = \mu(\tilde{x}) + \sigma(\tilde{x}) \cdot \epsilon$, where $\epsilon \sim N(0, 1)$. This constraint not only allows us to generate an informative representation, but also improves the generalization of our network towards $x$ (e.g., English) with higher confidence. CASE then aims at preserving the original semantics in the latent space, for which it includes a reconstruction loss, a mean squared error measuring how accurately the pseudo-adversarial example $\tilde{z}$ preserves the original semantics, i.e., $L_{sim} = \sum_{\tilde{D}} ||\tilde{z} - \tilde{x}||_2$. As a result, $\tilde{z}$ is fed into the classifier, and the overall loss of CASE is defined as follows:

$$L_{CASE} = L_{task} + L_{adv} + L_{align} + L_{sim} \quad (10)$$

Selective Refinement Lastly, CASE aims to refine only when the perturbation can actually improve the translation. In other words, if the translation is already good, CASE avoids refinement by parameterizing the refinement with $\alpha$ set to be near zero. Not refining correct translations is important, since more than half of the translations are correctly translated, as reported by (Yeo et al., 2018), such that refinement may lower their quality. For computing $\alpha$, CASE adopts a policy network P, which consists of a feed-forward network $P(x; \theta_p) = \text{softmax}(W_p x + b_p)$, to identify wrong translations by capturing the difference of domain distributions. The policy is then calculated as follows:

$$\alpha = KL(P(\tilde{x}) \,||\, P(x)) = \sum_{x \in D, \tilde{x} \in \tilde{D}} P(\tilde{x}) \log \frac{P(\tilde{x})}{P(x)} \quad (11)$$

in which $P(x)$ outputs a domain distribution of $x$, and CASE estimates $\alpha$ as the difference between the two distributions (i.e., KL divergence). The final loss function is defined factoring in $\alpha$: $L_{CASE} = L_{task} + L_{adv} + \alpha(L_{align} + L_{sim})$.

5 Experiments

5.1 Experimental Settings

To show the effectiveness of our proposed approaches, we conduct experiments in both multi-task and cross-lingual settings.

Multi-task Dataset For multi-task learning, we use five different datasets on Natural Language Inference (NLI) and Paraphrase Identification (PI) tasks: SNLI (Bowman et al., 2015), MNLI (Williams et al., 2018), and CNLI¹, for single-domain English, multi-domain English, and Chinese NLI respectively; QQP (Csernai et al., 2017) and LCQMC (Liu et al., 2018) for English and Chinese PI.

Cross-lingual Dataset We use the cross-lingual natural language inference (XNLI) dataset (Conneau et al., 2018)² covering 15 different languages for cross-lingual learning. The dataset is a version of MNLI (Williams et al., 2018) in which the 2,500 dev and 5,000 test examples have been translated (by humans) into 14 languages. For training, the English training data is translated into each target language by NMT.
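Before moving on to implementation details, the refiner and selective refinement mechanism described in Section 4 can be sketched as follows. This is a minimal PyTorch-style illustration under assumed layer types, sizes, and names (it is not the authors' code); the Gaussians in Eq. (9) are treated as diagonal, and a batch mean is taken for readability.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

class Refiner(nn.Module):
    """Sketch of CASE's refiner: encode a representation into a diagonal
    Gaussian and sample a pseudo-adversarial example via reparameterization."""

    def __init__(self, d=512):
        super().__init__()
        self.mu = nn.Linear(d, d)
        self.log_sigma = nn.Linear(d, d)

    def forward(self, x):
        mu, sigma = self.mu(x), self.log_sigma(x).exp()
        z = mu + sigma * torch.randn_like(sigma)     # reparameterization trick
        return z, Normal(mu, sigma)

refiner, policy = Refiner(), nn.Linear(512, 2)       # policy over 2 domains (assumed)
x, x_tilde = torch.randn(8, 512), torch.randn(8, 512)  # source / translated reps

z_tilde, q_tilde = refiner(x_tilde)
_, q_src = refiner(x)

# L_align (Eq. 9): pull the pseudo distribution toward the source-side one.
l_align = kl_divergence(q_tilde, q_src).sum(dim=-1).mean()
# L_sim: keep the refined example close to its original semantics.
l_sim = F.mse_loss(z_tilde, x_tilde)

# Selective refinement (Eq. 11): alpha from the policy network's domain
# distributions; it stays near zero when the translation already looks source-like.
p_tilde = F.softmax(policy(x_tilde), dim=-1)
p_src = F.softmax(policy(x), dim=-1)
alpha = (p_tilde * (p_tilde.clamp_min(1e-8).log()
                    - p_src.clamp_min(1e-8).log())).sum(dim=-1).mean()

loss_refine = alpha * (l_align + l_sim)              # added to L_task + L_adv
```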
Implementation Details For all encoder, we adopt BiLSTM-max (Conneau et al., 2017) model and the pre-trained word embeddings we use are 300-dimensional fastText word embeddings (Bojanowski et al., 2017). Following (Conneau et al., 2018), the BiLSTM hidden states is set to 256 and Adam optimizer with a learning rate of 0.001 was applied. The learning rate was decreased by a factor of 0.85 when the target dev accuracy does not improve. As in (Conneau et al., 2018), for text classification networks, we use a feedforward neural network with one hidden layer of 128 hidden units with a dropout rate of 0.1, to 1https://github.com/blcunlp/CNLI 2https://www.nyu.edu/projects/bowman/xnli/XNLI1.0.zip Source Model SNLI MNLI QQP (Single Task) BiLSTM-max 81.95 65.98 85.89 SNLI+MNLI AFS 82.06 66.51 (+0.11) (+0.53) ASP 82.28 67.39 (+0.33) (+1.41) TARS 82.67 67.79 (+0.70) (+1.81) SNLI+QQP AFS 82.03 85.08 (+0.08) (-0.81) ASP 82.20 86.22 (+0.25) (+0.33) TARS 82.54 86.51 (+0.59) (+0.62) MNLI+QQP AFS 66.62 85.59 (+0.64) (-0.30) ASP 66.92 86.12 (+0.94) (+0.23) TARS 67.37 86.47 (+1.39) (+0.58) Table 1: Accuracy over MTL with two-source tasks measure the relatedness of a given premise and hypothesis. The hyperparameter λ is empirically set to 0.005. All our implementation is available at github.com/haejupark/soft. 5.2 Experimental Result I: MTL Using (Liu et al., 2017) as hard-code baselines, we apply Adversarial training (and so-called orthogonality constraints) to FS and SP models, namely AFS and ASP. Such techniques enhance the distinct nature of shared and private features. Two-source MTL Table 1 shows the performance on three text classification tasks. The first row shows the results of “single task”, and other rows show the results of “multiple tasks” by corresponding MTL models trained with two source tasks. More concretely, (SNLI+MNLI) and (*NLI+QQP) are for cross-domain and cross-task classification respectively. In this table, we can see that TARS achieves higher accuracy than all sharing scheme baselines in all scenarios, surpassing multi-task learning (i.e., ASP) as well as single task learning. These results show that our softcode approach also works well in typical MTL settings with two source tasks, though they are not our targeted sparse scenario. Three-source MTL In Table 2, MTL models use three source tasks (SNLI+MNLI+QQP), where the first row shows the results of “single task”. We first test SNLI, MNLI, and QQP as a supervised target task. From the results, we can see that TARS outperforms all baselines including MoE, which is a variant of TARS excluding the two auxiliary losses. We also include the recent work, 1565 Model SNLI MNLI QQP CNLI LCQMC BiLSTM-max 81.95 65.98 85.89 64.42 79.69 AFS 81.70 66.78 85.41 39.70 61.29 (-0.25) (+0.80) (-0.48) (-24.72) (-18.40) ASP 82.23 66.92 86.04 (+0.28) (+0.94) (+0.15) MoE 81.55 66.72 85.23 39.45 63.02 (-0.40) (+0.74) (-0.66) (-24.97) (-16.67) MMoE 81.46 67.01 85.29 (-0.49) (+1.03) (-0.60) TARS 83.12 68.24 86.15 40.52 63.45 (+1.17) (+2.26) (+0.26) (-23.90) (-16.24) Table 2: Accuracy of MTL with three-source tasks MMoE (Ma et al., 2018), which explicitly learns to model task relationship by modeling an expert for each task (which is not desirable for a new task). This suggests that the synergetic effect of soft-private and -shared modules in TARS is critical to outperform other baselines. Specifically, AFS and ASP show a “negative transfer”, which is an inherent challenge of MTL. 
For example, ASP with three-source tasks achieves 82.23% and 66.92% accuracy, respectively, in SNLI and MNLI, which are lower than 82.28% and 67.39% accuracy with its best performance with two-source tasks. In contrast, TARS overcomes such challenges, for example, 83.12% > 82.67% and 68.24% > 67.79% in SNLI and MNLI, except for QQP, which can be further improved by asymmetric MTL techniques (Lee et al., 2016). To investigate how TARS helps transfer knowledge across tasks, Figure 2a and 2b contrast the feature representation of shared space in ASP and TARS, in two- and three-source settings respectively. First, for two-sources, ASP and TARS are comparable, capturing the distribution of two tasks that are nearly identical, which is desirable for transfer learning. Second, for three sources, the shared space of ASP shows two quite distinct distributions (task-dependent), while TARS keeps two distributions comparable (and task-invariant). Zero-shot Learning Lastly, in Table 2, we test zero-shot learning with two target tasks, CNLI and LCQMC, excluding their own training data (except for the first row single task). As ASP requires target task labels to train its private encoders, we compare TARS only with AFS and MoE, where TARS shows the best performance in MTL. As shown in Figure 3, we observe that when TARS covers sentences in CNLI and LCQMC, using its gating network that identifies that the unknown target tasks are the most similar to SNLI and QQP, respectively: Specifically, highest weights are assigned to these two, but other source tasks also contribute, with non-zero weights. (a) Shared space for two-source (b) Shared space for three-source Figure 2: PCA visualization. Blue and red indicate the shared features of SNLI and QQP, respectively, using ASP (left) and TARS (right). 0.0 0.2 0.4 0.6 0.8 1.0 LCQMC CNLI SNLI MNLI QQP Figure 3: Gating weights in zero-shot learning. 5.3 Experimental Result II: CLL Table 3 shows our results on 14 XNLI languages. Following (Conneau et al., 2018), we divide the models into following three categories: 1) Translate train, where the English NLI training set is translated into each XNLI language and train a language-specific NLI classifier for each language; 2) Translate test, where all dev and test set of XNLI is translated to English and apply English NLI classifier; and 3) Zero-shot Learning, where English classifier is directly applied to the target language without any translation. We also report the results of XNLI baselines (Conneau et al., 2018), a supervised cross-lingual MTL model that combines the Ladv loss using pseudoparallel data (Liu et al., 2017), the multilingual BERT (Devlin et al., 2018), and the recent work of (Artetxe and Schwenk, 2018). 
First, in Table 3, we can see that BiLSTM model (Conneau et al., 2018), in Translate test, appears 1566 en →xx fr es de el bg ru tr ar vi th zh hi sw ur Translate train, each NLI models for each language BiLSTM (Conneau et al., 2018) 68.3 68.8 66.5 66.4 67.4 66.5 64.5 65.8 66.0 62.8 67.0 62.1 58.2 56.6 BiLSTM+MTL (Liu et al., 2017) 66.0 68.7 67.3 67.4 68.2 64.8 65.3 65.1 66.1 59.3 66.2 54.2 60.0 58.0 CASE (w/o selective) 70.4 70.3 70.2 69.2 70.0 69.6 69.4 68.8 69.3 67.4 70.9 67.4 67.9 66.8 CASE (w selective) 71.1 71.2 70.0 70.3 69.9 69.8 70.0 70.1 70.5 68.9 71.3 68.7 67.7 67.5 Multilingual BERT (Devlin et al., 2018) 77.3* 75.2* 70.5* 74.2* 61.7* Multilingual BERT on CASE (w selective) 78.7 78.2 76.4 76.7 75.8 75.5 73.3 73.7 74.2 72.3 74.3 72.2 71.6 71.3 Translate test, one English NLI model for all languages BiLSTM (Conneau et al., 2018) 70.4 70.7 68.7 69.1 70.4 67.8 66.3 66.8 66.5 64.4 68.3 64.2 61.8 59.3 Multilingual BERT (Devlin et al., 2018) 74.9* 74.4* 70.4* 70.1* 62.1* Zero-Shot Learning, one NLI model for all languages BiLSTM (Conneau et al., 2018) 67.7 68.7 67.7 68.9 67.9 65.4 64.2 64.8 66.4 64.1 65.8 64.1 55.7 58.4 Multilingual BERT (Devlin et al., 2018) 74.3* 70.5* 62.1* 63.8* 58.3* (Artetxe and Schwenk, 2018) 71.9 72.9 72.6 73.1 74.2 71.5 69.7 71.4 72.0 69.2 71.4 65.5 62.2 61.0 Table 3: Accuracy over 14 XNLI languages (test set accuracy). We report results for translation baselines, multitask learning baselines and zero-shot baselines. Overall best results are in bold, and the best in each group is underlined. All results * from its Github project https://github.com/google-research/bert/blob/ master/multilingual.md. to perform consistently better than Translate train for all languages, which means a single English model works better than training each target model with translated data. In contrast, Multilingual BERT (Devlin et al., 2018) achieves best results on Translate train, outperforming most languages, suggesting the generalization of BERT across languages significantly better than BiLSTM model. Meanwhile, CASE, significantly outperforms the BiLSTM and BiLSTM+MTL models in Translate train for all languages, and even outperforms BiLSTM in Translate test. Compared to the best performing MTL baseline, CASE achieves an improvement of 1.7% and 9.5% in Bulgarian (bg) and Urdu (ur) languages respectively. From these results, we observe that: 1) the improvements on low-resource language (e.g., Swahili and Urdu) are more substantial than those on other languages; 2) the selective refinement strategy consistently contributes to the performance improvement. These results show that CASE, by incorporating pseudo-adversarial example as an additional resource, contributes to the robustness and the generalization of the model. Lastly, we show that CASE with multilingual BERT model achieves the state-of-the-art, and even significantly outperforms the supervised approach of (Artetxe and Schwenk, 2018) enjoying an unfair advantage of extremely large amounts of parallel sentences. These results show that CASE, with the help of strong baselines, gets a significant boost in performance, particularly for Swahili and Urdu that are low-resource languages, achieving the improvement of 9.4% and 10.3% respectively. Robustness Analysis In order to verify whether CASE is robust, inspired by (Goodfellow et al., 2015), we test if models keep its prediction, even after changes to the sentence, as long as the meaning remains unchanged. 
For example, the given sentence can be paraphrased by changing some words with their synonyms, and the models should give the same answer to the paraphrase. Meanwhile, existing models, especially those overfitted to surface forms, are sensitive to such “semantic-preserving” perturbations. As human annotation for such perturbations is expensive, an automated approach (Alzantot et al., 2018) was studied for English, to generate semanticpreserving adversaries that fool well-trained sentiment analysis and NLI models with success rates of 97% and 70%, respectively. In our problem setting of XNLI, we need such a generator (or generated resources) for each language. For which, we identify three research questions: • (RQ1) How hard is it to build a generator for a new language? • (RQ2) Are the observations consistent? • (RQ3) Does our model improve robustness? Specifically, in this paper we focus on Chinese, as we could hire native speaking volunteers to validate whether automatically generated perturbations indeed preserve semantics. First, for RQ1, we leverage Chinese synonyms and antonyms to build counter fitting vectors as (Mrkˇsi´c et al., 2016) to ensure the selected words are synonyms. Then, we slightly change 1567 Original Text Prediction: Contradiction (Confidence = 97%) Premise: 能帮助我的女孩在小镇的另一边。 Hypothesis: 没 没 没有 有 有人能帮助我。 Adversarial Text Prediction: Entailment (Confidence = 59%) Premise: 能帮助我的女孩在小镇的另一边。 Hypothesis: 并 并 并没 没 没有 有 有人能帮助我。 Table 4: Example of generated adversarial example for chinese natural language inference task. (Alzantot et al., 2018)3 to automatically generate Chinese perturbations for NLI task. Following the convention of (Alzantot et al., 2018), for NLI problem, we only add perturbation to the hypothesis, excluding premise, and aim to divert the prediction result from entailment to contradiction, and vice versa. Table 4 is an example of generated adversarial example. For RQ2, we validate the automatically generated perturbations by native speaking volunteers. We show volunteers 500 samples to label whether it is contradiction, neutral or entailment. 84 percent of the responses matched the original ground truth. Second, we sample 500 samples, with each sample including the original sentence and the corresponding adversarial example. Volunteers were asked to judge the similarity of each pair on a scale from 1 (very different) to 4 (very similar). The average rating is 2.12, which shows the performance of our implementation for Chinese perturbation is also competitive. Lastly, for RQ3, we show the attack success rates over generated adversarial example in Table 5. For comparison, we include the single task and MTL baselines. As shown in the Table 5, CASEs are able to achieve higher defense rate (or lower success rate) in performance of 36.6%, while baselines obtained 15.7% and 21.4% respectively, which demonstrates incorporating pseudoadversarial example is indeed helpful to the robustness of the model. Model % Success BiLSTM 0.843 BiLSTM+MTL 0.786 CASE (w/o selective) 0.657 CASE (w selective) 0.634 Table 5: Attack success rates over Chinese adversarial example for the text classification task. 3https://github.com/nesl/nlp adversarial examples 6 Related Work Transfer Learning: Transfer learning enables effective knowledge transfer from the source to the target task. 
Early works mainly focused on the shared representation methods (Liu et al., 2017; Tong et al., 2018; Lin et al., 2018), using a single shared encoder between all tasks while keeping several task-dependent output layers. However, the sparseness of the shared space, when shared by K tasks, was observed (Sachan and Neubig, 2018). In this paper, we study a soft-coding approach to overcome sparsity, leading to performance gains in MTL and CLL tasks. Closely related work is MMoE (Ma et al., 2018), which explicitly learns the task relationship by modeling a gating network for each task. Such work does not consider which combination of networks to use for a new task, while we differentiate by deciding such combination for a new task based on its similarity to the source tasks. Adversarial Example: Despite the success of deep neural networks, neural models are still brittle to adversarial examples (Goodfellow et al., 2015). Recently, adversarial examples are widely incorporated into training to improve the generalization and robustness of the model using back-translated paraphrases (Iyyer et al., 2018), machine-generated rules (Ribeiro et al., 2018), black-box (Alzantot et al., 2018) and whitebox (Ebrahimi et al., 2018). Inspired, we study pseudo-adversarial example in latent space to improve the robustness of the model. To the best of our knowledge, we are the first proposing pseudoadversarial training in latent space for transfer learning. 7 Conclusion In this paper, we study the limitations of hardparameter sharing in sparse transfer learning. We propose soft-code approaches to avoid the sparseness observed in MTL and CLL. We have demonstrated the effectiveness and flexibility of our softcode approaches in extensive evaluations over MTL and CLL scenarios. Acknowledgments This work is supported by Microsoft Research Asia and IITP grant funded by the Korean government (MSIT, 2017-0-01779, XAI). Hwang is a corresponding author. 1568 References Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In EMNLP. Mikel Artetxe and Holger Schwenk. 2018. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. In arXiv preprint arXiv:1812.10464. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In EMNLP. Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating crosslingual sentence representations. In EMNLP. Korn´el Csernai, Shankar Iyer, and Nikhil Dandekar. 2017. Quora question pairs. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. Hotflip: White-box adversarial examples for text classification. In ACL. Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In ICML. Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. 
Explaining and harnessing adversarial examples. In ICLR. Jiang Guo, Darsh Shah, and Regina Barzilay. 2018. Multi-source domain adaptation with mixture of experts. In EMNLP. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In NAACL-HLT. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114. Giwoong Lee, Eunho Yang, and Sung Hwang. 2016. Asymmetric multi-task learning based on task relatedness and loss. In ICML. Ying Lin, Shengqi Yang, Veselin Stoyanov, and Heng Ji. 2018. A multi-lingual multi-task architecture for low-resource sequence labeling. In ACL. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-task learning for text classification. In ACL. Xin Liu, Qingcai Chen, Chong Deng, Jing Chen, Dongfang Li, and Huajun Zeng. 2018. LCQMC:A Largescale Chinese Question Matching Corpus. In COLING. Jiaqi Ma, Zhe Zhao, Xinyang Yi, Jilin Chen, Lichan Hong, and Ed H. Chi. 2018. Modeling task relationships in multi-task learning with multi-gate mixture-of-experts. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’18. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Blaise Thomson, Milica Gaˇsi´c, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In NAACL-HLT. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging nlp models. In ACL. Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders Søgaard. 2019. Latent multi-task architecture learning. In AAAI. Devendra Singh Sachan and Graham Neubig. 2018. Parameter sharing methods for multilingual selfattentional translation models. In WMT. Victor Sanh, Thomas Wolf, and Sebastian Ruder. 2019. A hierarchical multi-task approach for learning embeddings from semantic tasks. In AAAI. Holger Schwenk and Matthijs Douze. 2017. Learning joint multilingual sentence representations with neural machine translation. arXiv preprint arXiv:1704.04154. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint. Xiaowei Tong, Zhenxin Fu, Mingyue Shang, Dongyan Zhao, and Rui Yan. 2018. One “ruler” for all languages: Multi-lingual dialogue evaluation with adversarial multi-task learning. In IJCAI. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL-HLT. Jinyoung Yeo, Geungyu Wang, Hyunsouk Cho, Seungtaek Choi, and Seung-won Hwang. 2018. Machinetranslated knowledge transfer for commonsense causal reasoning. In AAAI.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1569–1576 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1569 Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization Paul Pu Liang♣∗, Zhun Liu♢∗, Yao-Hung Hubert Tsai♣ Qibin Zhao♡, Ruslan Salakhutdinov♣, Louis-Philippe Morency♢ ♣Machine Learning Department, Carnegie Mellon University, USA ♢Language Technologies Institute, Carnegie Mellon University, USA ♡Tensor Learning Unit, RIKEN Center for Artificial Intelligence Project, Japan {pliang,zhunl,yaohungt,rsalakhu,morency}@cs.cmu.edu [email protected] Abstract There has been an increased interest in multimodal language processing including multimodal dialog, question answering, sentiment analysis, and speech recognition. However, naturally occurring multimodal data is often imperfect as a result of imperfect modalities, missing entries or noise corruption. To address these concerns, we present a regularization method based on tensor rank minimization. Our method is based on the observation that high-dimensional multimodal time series data often exhibit correlations across time and modalities which leads to low-rank tensor representations. However, the presence of noise or incomplete values breaks these correlations and results in tensor representations of higher rank. We design a model to learn such tensor representations and effectively regularize their rank. Experiments on multimodal language data show that our model achieves good results across various levels of imperfection. 1 Introduction Analyzing multimodal language sequences spans various fields including multimodal dialog (Das et al., 2017; Rudnicky, 2005), question answering (Antol et al., 2015; Tapaswi et al., 2015; Das et al., 2018), sentiment analysis (Morency et al., 2011), and speech recognition (Palaskar et al., 2018). Generally, these multimodal sequences contain heterogeneous sources of information across the language, visual and acoustic modalities. For example, when instructing robots, these machines have to comprehend our verbal instructions and interpret our nonverbal behaviors while grounding these inputs in their visual sensors (Schmerling et al., 2017; Iba et al., 2005). 
Likewise, comprehending human intentions requires integrating human language, speech, facial behaviors, and body postures (Mihalcea, 2012; Rossiter, 2011). (∗ First two authors contributed equally.)

Figure 1: Clean multimodal time series data (in shades of green) exhibits correlations across time and across modalities, leading to redundancy in low-rank tensor representations. On the other hand, the presence of imperfect entries (in gray, blue, and red) breaks these correlations and leads to higher-rank tensors. In these scenarios, we use tensor rank regularization to learn tensors that more accurately represent the true correlations and latent structures in multimodal data.
However, while additional modalities can improve performance, they also bring the challenge of imperfect data: the data might be 1) incomplete due to mismatched modalities or sensor failure, or 2) corrupted with random or structured noise. As a result, an important research question involves learning robust representations from imperfect multimodal data. Recent research in both unimodal and multimodal learning has investigated the use of tensors for representation learning (Anandkumar et al., 2014). Given representations $\mathbf{h}_1, \ldots, \mathbf{h}_M$ from $M$ modalities, the order-$M$ outer product tensor $\mathcal{T} = \mathbf{h}_1 \otimes \mathbf{h}_2 \otimes \ldots \otimes \mathbf{h}_M$ is a natural representation for all possible interactions between the modality dimensions (Liu et al., 2018). In this paper, we propose a model called the Temporal Tensor Fusion Network (T2FN) that builds tensor representations from multimodal time series data. T2FN learns a tensor representation that captures multimodal interactions across time. A key observation is that clean data exhibits tensors that are low-rank, since high-dimensional real-world data is often generated from lower-dimensional latent structures (Lakshmanan et al., 2015). Furthermore, clean multimodal time series data exhibits correlations across time and across modalities (Yang et al., 2017; Hidaka and Yu, 2010). This leads to redundancy in these overparametrized tensors, which explains their low rank (Figure 1). On the other hand, the presence of noise or incomplete values breaks these natural correlations and leads to higher-rank tensor representations. As a result, we can use tensor rank minimization to learn tensors that more accurately represent the true correlations and latent structures in multimodal data, thereby alleviating imperfection in the input. With these insights, we show how to integrate tensor rank minimization as a simple regularizer for training in the presence of imperfect data. As compared to previous work on imperfect data (Sohn et al., 2014; Srivastava and Salakhutdinov, 2014; Pham et al., 2019), our model does not need to know which of the entries or modalities are imperfect beforehand. Our model combines the strength of temporal non-linear transformations of multimodal data with a simple regularization technique on tensor structures. We perform experiments on multimodal video data consisting of humans expressing their opinions using a combination of language and nonverbal behaviors. Our results back up our intuitions that imperfect data increases tensor rank. Finally, we show that our model achieves good results across various levels of imperfection.

2 Related Work

Tensor Methods: Tensor representations have been used for learning discriminative representations in unimodal and multimodal tasks. Tensors are powerful because they can capture important higher-order interactions across time, feature dimensions, and multiple modalities (Kossaifi et al., 2017). For unimodal tasks, tensors have been used for part-of-speech tagging (Srikumar and Manning, 2014), dependency parsing (Lei et al., 2014), word segmentation (Pei et al., 2014), question answering (Qiu and Huang, 2015), and machine translation (Setiawan et al., 2015). For multimodal tasks, Huang et al. (2017) used tensor products between images and text features for image captioning. A similar approach was proposed to learn representations across text, visual, and acoustic features to infer speaker sentiment (Liu et al., 2018; Zadeh et al., 2017).
Other applications include multimodal machine translation (Delbrouck and Dupont, 2017), audio-visual speech recognition (Zhang et al., 2017), and video semantic analysis (Wu et al., 2009; Gao et al., 2009).

Imperfect Data: In order to account for imperfect data, several works have proposed generative approaches for multimodal data (Sohn et al., 2014; Srivastava and Salakhutdinov, 2014). Recently, neural models such as cascaded residual autoencoders (Tran et al., 2017), deep adversarial learning (Cai et al., 2018), or translation-based learning (Pham et al., 2019) have also been proposed. However, these methods often require knowing which of the entries or modalities are imperfect beforehand. While there has been some work on using low-rank tensor representations for imperfect data (Chang et al., 2017; Fan et al., 2017; Chen et al., 2017; Long et al., 2018; Nimishakavi et al., 2018), our approach is the first to integrate rank minimization with neural networks for multimodal language data, thereby combining the strength of non-linear transformations with the mathematical foundations of tensor structures.

3 Proposed Method

In this section, we present our method for learning representations from imperfect human language across the language, visual, and acoustic modalities. In §3.1, we discuss some background on tensor ranks. We outline our method for learning tensor representations via a model called the Temporal Tensor Fusion Network (T2FN) in §3.2. In §3.3, we investigate the relationship between tensor rank and imperfect data. Finally, in §3.4, we show how to regularize our model using tensor rank minimization. We use lowercase letters $x \in \mathbb{R}$ to denote scalars, boldface lowercase letters $\mathbf{x} \in \mathbb{R}^d$ to denote vectors, and boldface capital letters $\mathbf{X} \in \mathbb{R}^{d_1 \times d_2}$ to denote matrices. Tensors, which we denote by calligraphic letters $\mathcal{X}$, are generalizations of matrices to multidimensional arrays. An order-$M$ tensor has $M$ dimensions, $\mathcal{X} \in \mathbb{R}^{d_1 \times \ldots \times d_M}$. We use $\otimes$ to denote the outer product between vectors.

3.1 Background: Tensor Rank

The rank of a tensor measures how many vectors are required to reconstruct the tensor. Simple tensors that can be represented as outer products of
vectors have lower rank, while complex tensors have higher rank. To be more precise, we define the rank of a tensor using Canonical Polyadic (CP) decomposition (Carroll and Chang, 1970). For an order-$M$ tensor $\mathcal{X} \in \mathbb{R}^{d_1 \times \ldots \times d_M}$, there exists an exact decomposition into vectors $\mathbf{w}$:

$\mathcal{X} = \sum_{i=1}^{r} \bigotimes_{m=1}^{M} \mathbf{w}_m^i.$   (1)

The minimal $r$ for an exact decomposition is called the rank of the tensor. The vectors $\{\{\mathbf{w}_m^i\}_{m=1}^{M}\}_{i=1}^{r}$ are called the rank-$r$ decomposition factors of $\mathcal{X}$.

Figure 2: The Temporal Tensor Fusion Network (T2FN) creates a tensor $\mathcal{M}$ from temporal data. The rank of $\mathcal{M}$ increases with imperfection in data, so we regularize our model by minimizing an upper bound on the rank of $\mathcal{M}$. (The diagram shows language, visual, and acoustic LSTMs mapping inputs $[\mathbf{x}_\ell^1, \ldots, \mathbf{x}_\ell^T]$, $[\mathbf{x}_v^1, \ldots, \mathbf{x}_v^T]$, $[\mathbf{x}_a^1, \ldots, \mathbf{x}_a^T]$ to hidden states, which are fused as $\mathcal{M}_t = \mathbf{h}_\ell^t \otimes \mathbf{h}_v^t \otimes \mathbf{h}_a^t$ and summed, $\mathcal{M} = \sum_{t=1}^{T} \mathcal{M}_t$, to predict the label $y$.)

3.2 Multimodal Tensor Representations

Our model for creating tensor representations is called the Temporal Tensor Fusion Network (T2FN), which extends the model in Zadeh et al. (2017) to include a temporal component. We show that T2FN increases the capacity of TFN to capture high-rank tensor representations, which itself leads to improved prediction performance. More importantly, our knowledge about tensor rank properties allows us to regularize our model effectively for imperfect data. We begin with time series data from the language, visual and acoustic modalities, denoted as $[\mathbf{x}_\ell^1, \ldots, \mathbf{x}_\ell^T]$, $[\mathbf{x}_v^1, \ldots, \mathbf{x}_v^T]$, and $[\mathbf{x}_a^1, \ldots, \mathbf{x}_a^T]$ respectively. We first use Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) to encode the temporal information from each modality, resulting in a sequence of hidden representations $[\mathbf{h}_\ell^1, \ldots, \mathbf{h}_\ell^T]$, $[\mathbf{h}_v^1, \ldots, \mathbf{h}_v^T]$, and $[\mathbf{h}_a^1, \ldots, \mathbf{h}_a^T]$.
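As a point of reference only (not the authors' implementation), the per-modality encoding step just described could look like the following PyTorch sketch; the batch size, feature dimensions, and hidden size are illustrative assumptions rather than values from the paper.

import torch
import torch.nn as nn

# Illustrative (assumed) feature sizes for language, visual, acoustic inputs,
# and a common LSTM hidden size.
D_L, D_V, D_A, D_HID = 300, 35, 74, 32

class UnimodalEncoders(nn.Module):
    """Encode each modality's time series into hidden-state sequences
    [h_l^1..h_l^T], [h_v^1..h_v^T], [h_a^1..h_a^T] with one LSTM per modality."""
    def __init__(self):
        super().__init__()
        self.lstm_l = nn.LSTM(D_L, D_HID, batch_first=True)
        self.lstm_v = nn.LSTM(D_V, D_HID, batch_first=True)
        self.lstm_a = nn.LSTM(D_A, D_HID, batch_first=True)

    def forward(self, x_l, x_v, x_a):
        # Each input: (batch, T, feature_dim); each output: (batch, T, D_HID).
        h_l, _ = self.lstm_l(x_l)
        h_v, _ = self.lstm_v(x_v)
        h_a, _ = self.lstm_a(x_a)
        return h_l, h_v, h_a

# Example with word-aligned sequences of length T = 20 and batch size 8.
enc = UnimodalEncoders()
x_l, x_v, x_a = (torch.randn(8, 20, d) for d in (D_L, D_V, D_A))
h_l, h_v, h_a = enc(x_l, x_v, x_a)  # three (8, 20, 32) hidden-state sequences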
Similar to prior work which found tensor representations to capture higher-order interactions from multimodal data (Liu et al., 2018; Zadeh et al., 2017; Fukui et al., 2016), we form tensors via outer products of the individual representations through time (as shown in Figure 2):

$\mathcal{M} = \sum_{t=1}^{T} [\mathbf{h}_\ell^t; 1] \otimes [\mathbf{h}_v^t; 1] \otimes [\mathbf{h}_a^t; 1]$   (2)

where we append a 1 so that unimodal, bimodal, and trimodal interactions are all captured, as described in Zadeh et al. (2017). $\mathcal{M}$ is our multimodal representation, which can then be used to predict the label $y$ using a fully connected layer. Observe how the construction of $\mathcal{M}$ closely resembles equation (1) as the sum of vector outer products. As compared to TFN, which uses a single outer product to obtain a multimodal tensor of rank one, T2FN creates a tensor of high rank (upper bounded by $T$). As a result, the notion of rank naturally emerges when we reason about the properties of $\mathcal{M}$.

3.3 How Does Imperfection Affect Rank?

We first state several observations about the rank of the multimodal representation $\mathcal{M}$: 1) $r_{\text{noisy}}$: The rank of $\mathcal{M}$ is maximized when data entries are sampled from i.i.d. noise (e.g. Gaussian distributions). This is because this setting leads to no redundancy at all between the feature dimensions across time steps. 2) $r_{\text{clean}} < r_{\text{noisy}}$: Clean real-world data is often generated from lower-dimensional latent structures (Lakshmanan et al., 2015). Furthermore, multimodal time series data exhibits correlations across time and across modalities (Yang et al., 2017; Hidaka and Yu, 2010). This redundancy leads to low-rank tensor representations. 3) $r_{\text{clean}} < r_{\text{imperfect}} < r_{\text{noisy}}$: If the data is imperfect, the presence of noise or incomplete values breaks these natural correlations and leads to higher-rank tensor representations. These intuitions are also backed up by several experimental results which are presented in §4.2.

3.4 Tensor Rank Regularization

Given our intuitions above, it would then seem natural to augment the discriminative objective function with a term to minimize the rank of $\mathcal{M}$. In practice, the rank of an order-$M$ tensor is computed using the nuclear norm $\|\mathcal{X}\|_*$, which is defined as (Friedland and Lim, 2014):

$\|\mathcal{X}\|_* = \inf \left\{ \sum_{i=1}^{r} |\lambda_i| : \mathcal{X} = \sum_{i=1}^{r} \lambda_i \left( \bigotimes_{m=1}^{M} \mathbf{w}_m^i \right), \|\mathbf{w}_m^i\| = 1, r \in \mathbb{N} \right\}.$   (3)

When $M = 2$, this reduces to the matrix nuclear norm (sum of singular values). However, computing the rank of a tensor or its nuclear norm is NP-hard for tensors of order $\geq 3$ (Friedland and Lim, 2014). Fortunately, there exist efficiently computable upper bounds on the nuclear norm, and minimizing these upper bounds would also minimize the nuclear norm $\|\mathcal{M}\|_*$. We choose the upper bound presented in Hu (2014), which bounds the nuclear norm by the tensor Frobenius norm scaled by the tensor dimensions:

$\|\mathcal{M}\|_* \leq \sqrt{\dfrac{\prod_{i=1}^{M} d_i}{\max\{d_1, \ldots, d_M\}}} \; \|\mathcal{M}\|_F,$   (4)

where the Frobenius norm $\|\mathcal{M}\|_F$ is the square root of the sum of squared entries in $\mathcal{M}$. Since $\|\mathcal{M}\|_F$ is easily computable and convex, including this term adds negligible computational cost to the model. We will use this upper bound as a surrogate for the nuclear norm in our objective function. Our objective function is therefore a weighted combination of the prediction loss and the tensor rank regularizer in equation (4).

4 Experiments

Our experiments are designed with two research questions in mind: 1) What is the effect of various levels of imperfect data on tensor rank in T2FN? 2) Does T2FN with rank regularization perform well on prediction with imperfect data?
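The following is a minimal PyTorch sketch of equations (2) and (4), not the authors' implementation: the fused tensor is built with an einsum over appended-1 hidden states, and the Frobenius-norm upper bound on the nuclear norm is added to the loss. The batch size, hidden size, prediction head, L1 prediction loss, and the 0.01 regularization weight are all illustrative assumptions.

import torch
import torch.nn as nn

# Stand-ins for the per-modality LSTM outputs (batch 4, T = 20, hidden 8).
B, T, D = 4, 20, 8
h_l, h_v, h_a = (torch.randn(B, T, D) for _ in range(3))

def t2fn_tensor(h_l, h_v, h_a):
    """Equation (2): M = sum_t [h_l^t; 1] (x) [h_v^t; 1] (x) [h_a^t; 1].
    Output shape: (batch, D+1, D+1, D+1)."""
    pad = lambda h: torch.cat([h, torch.ones(h.shape[0], h.shape[1], 1)], dim=-1)
    hl, hv, ha = pad(h_l), pad(h_v), pad(h_a)
    # einsum sums over the time index t, giving one fused tensor per example.
    return torch.einsum('bti,btj,btk->bijk', hl, hv, ha)

def rank_regularizer(M):
    """Equation (4): scaled Frobenius norm, an upper bound on the nuclear norm ||M||_*."""
    dims = M.shape[1:]
    scale = (torch.tensor(dims).prod().float() / max(dims)) ** 0.5
    frob = M.flatten(1).pow(2).sum(dim=1).sqrt()   # per-example ||M||_F
    return (scale * frob).mean()

# Assumed objective: prediction loss plus a weighted rank penalty (the 0.01 weight is arbitrary).
M = t2fn_tensor(h_l, h_v, h_a)
head = nn.Linear((D + 1) ** 3, 1)                  # fully connected prediction layer
pred = head(M.flatten(1)).squeeze(-1)
loss = nn.L1Loss()(pred, torch.randn(B)) + 0.01 * rank_regularizer(M)
loss.backward()

In the full model the hidden states would come from the unimodal LSTMs, so the rank penalty also shapes the encoder parameters rather than acting on fixed random inputs as in this toy example.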
We answer these questions in §4.2 and §4.3 respectively.

4.1 Datasets

We experiment with real video data consisting of humans expressing their opinions using a combination of language and nonverbal behaviors. We use the CMU-MOSI dataset, which contains 2199 videos annotated for sentiment in the range $[-3, +3]$ (Zadeh et al., 2016). CMU-MOSI and related multimodal language datasets have been studied in the NLP community (Gu et al., 2018; Liu et al., 2018; Liang et al., 2018) in fully supervised settings but not from the perspective of supervised learning with imperfect data. We use 52 segments for training, 10 for validation and 31 for testing. GloVe word embeddings (Pennington et al., 2014), Facet (iMotions, 2017), and COVAREP (Degottex et al., 2014) features are extracted for the language, visual and acoustic modalities respectively. Forced alignment is performed using P2FA (Yuan and Liberman, 2008) to align visual and acoustic features to each word, resulting in a multimodal sequence. Our data splits, features, alignment, and preprocessing steps are consistent with prior work on the CMU-MOSI dataset (Liu et al., 2018).

4.2 Rank Analysis

We first study the effect of imperfect data on the rank of the tensor $\mathcal{M}$. We introduce the following types of noise, parametrized by noise level $\in \{0.0, 0.1, \ldots, 1.0\}$; higher noise levels imply more imperfection: 1) clean: no imperfection, 2) random drop: each entry is dropped independently with probability $p \in$ noise level, and 3) structured drop: independently for each modality, each time step is chosen with probability $p \in$ noise level. If a time step is chosen, all feature dimensions at that time step are dropped. For all imperfect settings, features are dropped during both training and testing. We would like to show how the tensor ranks vary under different imperfection settings. However, as is mentioned above, determining the exact rank of a tensor is an NP-hard problem (Friedland and Lim, 2014). In order to analyze the effect of imperfections on tensor rank, we perform CP decomposition (equation (5)) on the tensor representations under different rank settings $r$ and measure the reconstruction error $\epsilon$,

$\epsilon = \min_{\mathbf{w}_m^i} \left\| \left( \sum_{i=1}^{r} \bigotimes_{m=1}^{M} \mathbf{w}_m^i \right) - \mathcal{X} \right\|_F.$   (5)

Given the true rank $r^*$, $\epsilon$ will be high at ranks $r < r^*$, while $\epsilon$ will be approximately zero at ranks $r \geq r^*$ (for example, a rank-3 tensor would display a large reconstruction error with CP decomposition at rank 1, but would show almost zero error with CP decomposition at rank 3). By analyzing the effect of $r$ on $\epsilon$, we are then able to derive a surrogate $\tilde{r}$ for the true rank $r^*$.

Figure 3: (a) Effect of imperfect data on tensor rank: CP decomposition error of $\mathcal{M}$ under random and structured dropping of features; imperfect data leads to an increase in decomposition error and an increase in (approximate) tensor rank. (b) and (c): CMU-MOSI test accuracy under imperfect data: sentiment classification accuracy under random drop (dropping entries randomly with probability $p \in$ noise level) and under structured drop (dropping entire time steps randomly with probability $p \in$ noise level); T2FN with rank regularization performs well in both cases.

Using this approach, we experimented on CMU-MOSI and the results are shown in Figure 3(a). We observe that imperfection leads to an increase in (approximate) tensor rank as measured by reconstruction error (the graph shifts outwards and to the right), supporting our hypothesis that imperfect data increases tensor rank (§3.3).
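Below is a rough sketch, not the authors' code, of how the reconstruction-error analysis in equation (5) could be carried out: rank-$r$ CP factors are fitted by gradient descent and the Frobenius error is recorded for increasing $r$. A dedicated library (for example TensorLy's CP routines) could be used instead; the optimizer, step count, learning rate, and the toy tensor are assumptions.

import torch

def cp_reconstruction_error(X, r, steps=500, lr=0.05):
    """Equation (5): fit rank-r CP factors w_m^i by gradient descent and return the
    Frobenius error || sum_i w_1^i (x) w_2^i (x) w_3^i - X ||_F for an order-3 tensor X."""
    factors = [(0.1 * torch.randn(r, d)).requires_grad_() for d in X.shape]
    opt = torch.optim.Adam(factors, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        A, B, C = factors
        recon = torch.einsum('ri,rj,rk->ijk', A, B, C)  # sum of r rank-one terms
        loss = (recon - X).pow(2).sum().sqrt()
        loss.backward()
        opt.step()
    return loss.item()

# Error-versus-rank curve for one (toy) tensor; a detached T2FN tensor could be used instead.
M = torch.randn(6, 6, 6)
for r in (1, 2, 4, 8):
    # The error should flatten out once r exceeds the approximate rank of M.
    print(r, cp_reconstruction_error(M, r))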
4.3 Prediction Results

Our next experiment tests the ability of our model to learn robust representations despite data imperfections. We use the tensor $\mathcal{M}$ for prediction and report binary classification accuracy on the CMU-MOSI test set. We compare to several baselines: Early Fusion (EF)-LSTM, Late Fusion (LF)-LSTM, TFN, and T2FN without rank regularization. These results are shown in Figure 3(b) for random drop and Figure 3(c) for structured drop. T2FN with rank regularization maintains good performance despite imperfections in data. We also observe that our model's improvement is more significant in random drop settings, which result in a higher tensor rank as compared to structured drop settings (from Figure 3(a)). This supports our hypothesis that our model learns robust representations when imperfections that increase tensor rank are introduced. On the other hand, the existing baselines suffer in the presence of imperfect data.

5 Discussion and Future Work

We acknowledge that there are other alternative methods to upper bound the true rank of a tensor (Alexeev et al., 2011; Atkinson and Lloyd, 1980; Ballico, 2014). From a theoretical perspective, there exists a trade-off between the cost of computation and the tightness of approximation. In addition, the tensor rank can (far) exceed the maximum dimension, and a low-rank approximation for tensors may not even exist (de Silva and Lim, 2008). While our tensor rank regularization method seems to work well empirically, there is definitely room for a more thorough theoretical analysis of constructing and regularizing tensor representations for multimodal learning.

6 Conclusion

This paper presented a regularization method based on tensor rank minimization. We observe that clean multimodal sequences often exhibit correlations across time and modalities which lead to low-rank tensors, while the presence of imperfect data breaks these correlations and results in tensors of higher rank. We designed a model, the Temporal Tensor Fusion Network, to learn such tensor representations and effectively regularize their rank. Experiments on multimodal language data show that our model achieves good results across various levels of imperfection. We hope to inspire future work on regularizing tensor representations of multimodal data for robust prediction in the presence of imperfect data.

Acknowledgements

PPL, ZL, and LM are partially supported by the National Science Foundation (Award #1750439 and #1722822) and Samsung. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of Samsung and NSF, and no official endorsement should be inferred. YHT and RS are supported in part by DARPA HR00111990016, AFRL FA8750-18-C0014, NSF IIS1763562, Apple, and Google focused award. QZ is supported by JSPS KAKENHI (Grant No. 17K00326). We also acknowledge NVIDIA's GPU support and the anonymous reviewers for their constructive comments.

References

Boris Alexeev, Michael A. Forbes, and Jacob Tsimerman. 2011. Tensor rank: Some lower and upper bounds. CoRR, abs/1102.0072. Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. 2014. Tensor decompositions for learning latent variable models. J. Mach. Learn. Res., 15(1):2773–2832.
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In International Conference on Computer Vision (ICCV). M.D. Atkinson and S. Lloyd. 1980. Bounds on the ranks of some 3-tensors. Linear Algebra and its Applications, 31:19 – 31. E. Ballico. 2014. An upper bound for the real tensor rank and the real symmetric tensor rank in terms of the complex ranks. Linear and Multilinear Algebra, 62(11):1546–1552. Lei Cai, Zhengyang Wang, Hongyang Gao, Dinggang Shen, and Shuiwang Ji. 2018. Deep adversarial learning for multi-modality missing data completion. In KDD ’18, pages 1158–1166. J. Douglas Carroll and Jih-Jie Chang. 1970. Analysis of individual differences in multidimensional scaling via an n-way generalization of “eckart-young” decomposition. Psychometrika, 35(3):283–319. Yi Chang, Luxin Yan, Houzhang Fang, Sheng Zhong, and Zhijun Zhang. 2017. Weighted low-rank tensor recovery for hyperspectral image restoration. CoRR, abs/1709.00192. Xiai Chen, Zhi Han, Yao Wang, Qian Zhao, Deyu Meng, Lin Lin, and Yandong Tang. 2017. A general model for robust tensor factorization with unknown noise. CoRR, abs/1705.06755. Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. 2018. Embodied Question Answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e M.F. Moura, Devi Parikh, and Dhruv Batra. 2017. Visual Dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Gilles Degottex, John Kane, Thomas Drugman, Tuomo Raitio, and Stefan Scherer. 2014. Covarep - a collaborative voice analysis repository for speech technologies. In ICASSP. IEEE. Jean-Benoit Delbrouck and St´ephane Dupont. 2017. Multimodal compact bilinear pooling for multimodal neural machine translation. CoRR, abs/1703.08084. Haiyan Fan, Yunjin Chen, Yulan Guo, Hongyan Zhang, and Gangyao Kuang. 2017. Hyperspectral image restoration using low-rank tensor recovery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, PP:1–16. Shmuel Friedland and Lek-Heng Lim. 2014. Computational complexity of tensor nuclear norm. CoRR, abs/1410.6072. Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847. Xinbo Gao, Yimin Yang, Dacheng Tao, and Xuelong Li. 2009. Discriminative optical flow tensor for video semantic analysis. Computer Vision and Image Understanding, 113(3):372 – 383. Special Issue on Video Analysis. Yue Gu, Kangning Yang, Shiyu Fu, Shuhong Chen, Xinyu Li, and Ivan Marsic. 2018. Multimodal affective analysis using hierarchical attention strategy with word-level alignment. In ACL. Shohei Hidaka and Chen Yu. 2010. Analyzing multimodal time series as dynamical systems. In International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI ’10, pages 53:1–53:8, New York, NY, USA. ACM. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Shenglong Hu. 2014. Relations of the Nuclear Norms of a Tensor and its Matrix Flattenings. arXiv eprints, page arXiv:1412.2443. Qiuyuan Huang, Paul Smolensky, Xiaodong He, Li Deng, and Dapeng Oliver Wu. 2017. 
Tensor product generation networks. CoRR, abs/1709.09118. Soshi Iba, Christiaan J. J. Paredis, and Pradeep K. Khosla. 2005. Interactive multimodal robot programming. The International Journal of Robotics Research, 24(1):83–104. iMotions. 2017. Facial expression analysis. Jean Kossaifi, Zachary C. Lipton, Aran Khanna, Tommaso Furlanello, and Anima Anandkumar. 2017. Tensor regression networks. CoRR, abs/1707.08308. Karthik Lakshmanan, Patrick T. Sadtler, Elizabeth C. Tyler-Kabara, Aaron P. Batista, and Byron M. Yu. 2015. Extracting low-dimensional latent structure from time series in the presence of delays. Neural Computation, 27:1825–1856. 1575 Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-rank tensors for scoring dependency structures. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1381–1391. Paul Pu Liang, Ziyin Liu, Amir Zadeh, and LouisPhilippe Morency. 2018. Multimodal language analysis with recurrent multistage fusion. EMNLP. Zhun Liu, Ying Shen, Varun Bharadhwaj Lakshminarasimhan, Paul Pu Liang, AmirAli Bagher Zadeh, and Louis-Philippe Morency. 2018. Efficient lowrank multimodal fusion with modality-specific factors. In ACL. Zhen Long, Yipeng Liu, Longxi Chen, and Ce Zhu. 2018. Low rank tensor completion for multiway visual data. CoRR, abs/1805.03967. Rada Mihalcea. 2012. Multimodal sentiment analysis. In Proceedings of the 3rd Workshop in Computational Approaches to Subjectivity and Sentiment Analysis, WASSA ’12, pages 1–1, Stroudsburg, PA, USA. Association for Computational Linguistics. Louis-Philippe Morency, Rada Mihalcea, and Payal Doshi. 2011. Towards multimodal sentiment analysis: Harvesting opinions from the web. In Proceedings of the 13th international conference on multimodal interfaces, pages 169–176. ACM. Madhav Nimishakavi, Pratik Kumar Jawanpuria, and Bamdev Mishra. 2018. A dual framework for lowrank tensor completion. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 5484–5495. Curran Associates, Inc. Shruti Palaskar, Ramon Sanabria, and Florian Metze. 2018. End-to-end multimodal speech recognition. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Maxmargin tensor neural network for chinese word segmentation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 293–303. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Hai Pham, Paul Pu Liang, Thomas Manzini, LouisPhilippe Morency, and Barnabas Poczos. 2019. Found in translation: Learning robust joint representations by cyclic translations between modalities. AAAI. Xipeng Qiu and Xuanjing Huang. 2015. Convolutional neural tensor network architecture for communitybased question answering. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI’15, pages 1305–1311. AAAI Press. James Rossiter. 2011. Multimodal intent recognition for natural human-robotic interaction. Alexander I. Rudnicky. 2005. Multimodal Dialogue Systems, pages 3–11. Springer Netherlands, Dordrecht. Edward Schmerling, Karen Leung, Wolf Vollprecht, and Marco Pavone. 2017. 
Multimodal probabilistic model-based planning for human-robot interaction. CoRR, abs/1710.09483. Hendra Setiawan, Zhongqiang Huang, Jacob Devlin, Thomas Lamar, Rabih Zbib, Richard Schwartz, and John Makhoul. 2015. Statistical machine translation features with multitask tensor networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 31–41. Association for Computational Linguistics. Vin de Silva and Lek-Heng Lim. 2008. Tensor rank and the ill-posedness of the best low-rank approximation problem. SIAM J. Matrix Anal. Appl., 30(3):1084– 1127. Kihyuk Sohn, Wenling Shang, and Honglak Lee. 2014. Improved multimodal deep learning with variation of information. In NIPS. Vivek Srikumar and Christopher D Manning. 2014. Learning distributed representations for structured output prediction. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3266–3274. Curran Associates, Inc. Nitish Srivastava and Ruslan Salakhutdinov. 2014. Multimodal learning with deep boltzmann machines. JMLR, 15. Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Movieqa: Understanding stories in movies through question-answering. CoRR, abs/1512.02902. Luan Tran, Xiaoming Liu, Jiayu Zhou, and Rong Jin. 2017. Missing modalities imputation via cascaded residual autoencoder. In CVPR. F. Wu, Y. Liu, and Y. Zhuang. 2009. Tensor-based transductive learning for multimodality video semantic concept detection. IEEE Transactions on Multimedia, 11(5):868–878. 1576 Xiao Yang, Ersin Yumer, Paul Asente, Mike Kraley, Daniel Kifer, and C. Lee Giles. 2017. Learning to extract semantic structure from documents using multimodal fully convolutional neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Jiahong Yuan and Mark Liberman. 2008. Speaker identification on the scotus corpus. Journal of the Acoustical Society of America. Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2017. Tensor fusion network for multimodal sentiment analysis. In EMNLP, pages 1114–1125. Amir Zadeh, Rowan Zellers, Eli Pincus, and LouisPhilippe Morency. 2016. Mosi: Multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos. arXiv preprint arXiv:1606.06259. Qingchen Zhang, Laurence T. Yang, Xingang Liu, Zhikui Chen, and Peng Li. 2017. A tucker deep computation model for mobile multimedia feature learning. ACM Trans. Multimedia Comput. Commun. Appl., 13(3s):39:1–39:18.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1577–1583 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1577 Towards Lossless Encoding of Sentences Gabriele Prato∗ Mathieu Duchesneau Sarath Chandar Alain Tapp Mila, Universit´e de Montr´eal Abstract A lot of work has been done in the field of image compression via machine learning, but not much attention has been given to the compression of natural language. Compressing text into lossless representations while making features easily retrievable is not a trivial task, yet has huge benefits. Most methods designed to produce feature rich sentence embeddings focus solely on performing well on downstream tasks and are unable to properly reconstruct the original sequence from the learned embedding. In this work, we propose a near lossless method for encoding long sequences of texts as well as all of their sub-sequences into feature rich representations. We test our method on sentiment analysis and show good performance across all sub-sentence and sentence embeddings. 1 Introduction Compressing information by encoding it into a fixed size representation in such a way that perfect decoding is possible is challenging. Instead, most of the existing sentence encoding methods focus more on learning encoding such that the encoded representations are good enough for the downstream tasks. In this work, we focus on perfectly decodable encoding of sentences which will be very useful in designing good generative models that can generate longer sentences. Early efforts such as (Hinton and Salakhutdinov, 2006) have shown autoencoders to effectively yield compressed input representations. Pollack (1990) was the first to propose using autoencoders recursively. Such models have been shown to be useful for a multitude of tasks. Luong et al. (2013) use recursive neural networks and neural language models to better represent rare words ∗Corresponding author: [email protected] via morphemes. Socher et al. (2011a) use recursive autoencoders for paraphrase detection, learning sentence embeddings (Socher et al., 2010) and syntactic parsing. Socher et al. (2011b) also use a recursive autoencoder to build a tree structure based on error reconstruction. Additionally, Socher et al. (2012) use a matrix-vector RNN to learn semantic relationships present in natural language and show good performance on such task as well as sentiment classification. Then, Socher et al. (2013) introduced the Recursive Neural Tensor Network, trained on a their proposed Sentiment Treebank corpus to better deal with negating sub-sequences for better sentiment classification. Recently, Kokkinos and Potamianos (2017) proposed Structural Attention to build syntactic trees and improve even further performance on SST. Parse trees do alleviate the burden of learning the syntactic structure of text, but these methods limit the number of generated embeddings to the number of nodes in the parse tree. Our proposed method does not have such a restriction as all possible syntactic tree can be simultaneously represented by the architecture. Convolutional Neural Networks (LeCun et al., 1989) have been used in natural language processing as well. Convolutions work well for extracting low and high level text features and building sequence representations. Lai et al. (2015) proposed to use CNNs recurrently and show good performance on various language tasks. Zhang et al. 
(2015); Dos Santos and Gatti de Bayser (2014) both train CNNs on character level for sentiment analysis, while Johnson and Zhang (2014) work on word level. Kalchbrenner et al. (2014) propose a Dynamic Convolutional Neural Network for semantic modelling of sentences and apply their model to sentiment prediction. Our proposed model is very similar to 1D CNNs. In our case though, we use a multilayer perceptron in parallel instead of a kernel to extract meaningful information out of the layer's input. Much progress has been made in recent years in the field of general purpose sentence embeddings. Fixed length representations of sentence wide context are learned with the objective of serving for a wide range of downstream tasks. Conneau et al. (2017) trained a bidirectional LSTM on the AllNLI natural language inference corpus (Bowman et al., 2015; Williams et al., 2017), producing embeddings that generalized well on the SentEval (Conneau and Kiela, 2018) benchmark. Following this trend, Subramanian et al. (2018) trained a GRU (Cho et al., 2014) on Skip-thought vectors (Kiros et al., 2015), neural machine translation, parsing and natural language inference to get even better downstream task results. More recently, Devlin et al. (2018); Liu et al. (2019b,a) use Transformers (Vaswani et al., 2017) to produce sentence wide context embeddings for each input token and get state-of-the-art results on multiple natural language processing tasks. Dai et al. (2019) improve the Transformer method by recursively applying it to fixed length segments of text while using a hidden state to model long dependencies. One downside to these sentence embedding generation methods is that the context is always sequence wide. Our proposed model computes a sentence embedding as well as an embedding for all possible sub-sentences of the sequence with sub-sentence wide context only. All embeddings generated throughout our architecture are constructed the same way and thus share the same properties.

2 Recursive Autoencoder

We introduce our recursive autoencoding approach in this section. First we define our model's architecture and how each encoding and decoding recursion is performed. We then describe how the model keeps track of the recursion steps, followed by a description of how the input is represented. We also explain the advantages of using the mean squared error loss for our method. Finally, we dive into the implementation details.

2.1 Model Architecture

Our model is a recursive auto-encoder. Figure 1 shows an example of our architecture for a sequence of length three.

Figure 1: Example of our recursive autoencoder with an input sequence of length three. The encoder recursively takes two embeddings and outputs one until a single one is left, and the decoder takes one embedding and outputs two until there are as many as in the original sequence.

The encoder takes an input sequence $\{x_1, \cdots, x_n\}$, where $n$ is the sequence length of the layer's input, and outputs a sequence $\{y_1, \cdots, y_{n-1}\}$. The same $\{y_1, \cdots, y_{n-1}\}$ is then used as input for the next recursion, until the output sequence contains only a single element $y_1$, the sentence embedding. The recursion performs the following operation:

$y_i = \text{MLP}_{\text{enc}}([x_i; x_{i+1}]) \quad \forall i \in \{1, \cdots, n-1\}$   (1)

where $\text{MLP}_{\text{enc}}$ is a shared multilayer perceptron and $[x_i; x_{i+1}]$ is the concatenation of the embeddings $x_i$ and $x_{i+1}$. $\text{MLP}_{\text{enc}}$ is shared throughout all of the encoding recursion steps.
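As a rough illustration (not the authors' released code), the encoding recursion in equation (1) can be sketched in PyTorch as follows. The embedding size, the two-layer Linear/LayerNorm/ReLU structure of MLP_enc, and its hidden width are assumptions loosely following the implementation details in §2.5; the step encoding described in §2.2 is omitted.

import torch
import torch.nn as nn

D_EMB = 512  # illustrative embedding size

class MLPEnc(nn.Module):
    """Shared encoder MLP: maps a concatenated pair (2*d_emb) to one embedding (d_emb)."""
    def __init__(self, d=D_EMB):
        super().__init__()
        hidden = 3 * d // 2  # assumed hidden width, halfway between 2d and d
        self.net = nn.Sequential(
            nn.Linear(2 * d, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, d), nn.LayerNorm(d), nn.ReLU(),
        )

    def forward(self, pair):
        return self.net(pair)

def encode(x, mlp_enc):
    """Equation (1) applied recursively: at each level, y_i = MLP_enc([x_i; x_{i+1}]).
    Returns all intermediate levels; the last level holds the single sentence embedding."""
    levels = [x]
    while levels[-1].shape[0] > 1:
        cur = levels[-1]
        pairs = torch.cat([cur[:-1], cur[1:]], dim=-1)  # (n-1, 2*d_emb) adjacent pairs
        levels.append(mlp_enc(pairs))
    return levels

# Example: a sequence of n = 5 token embeddings collapses to 1 sentence embedding.
x = torch.randn(5, D_EMB)
levels = encode(x, MLPEnc())
print([lvl.shape[0] for lvl in levels])  # [5, 4, 3, 2, 1]

Note that every intermediate embedding produced along the way corresponds to a contiguous sub-sentence, which is what later allows classification of every node of a sentiment tree.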
For decoding, it is the inverse procedure of recursively transforming an input sequence $\{x_1, \cdots, x_n\}$ into an output sequence $\{y_1, \cdots, y_{n+1}\}$:

$[y_i; y'_{i+1}] = \text{MLP}_{\text{dec}}(x_i) \quad \forall i \in \{1, \cdots, n\}$   (2)

where $\text{MLP}_{\text{dec}}$ is the shared multilayer perceptron used by all decoding recursive steps and $[y_i; y'_{i+1}]$ is an embedding twice the size of $x_i$, which we then split into two embeddings $y_i$ and $y'_{i+1}$, each of the same size as $x_i$. Since we obtain two embeddings $y_i$ and $y'_{i+1}$ for each $x_i$, we will have the following embeddings: $y_1$, $\{y_2, \cdots, y_n\}$, $\{y'_2, \cdots, y'_n\}$ and $y'_{n+1}$. We merge the overlapping sets by computing the mean:

$y_i = \dfrac{y_i + y'_i}{2} \quad \forall i \in \{2, \cdots, n\}$   (3)

and set $y_{n+1} = y'_{n+1}$. We now have a single set of embeddings $\{y_1, \cdots, y_{n+1}\}$. Both max and mean functions gave similar results, hence we stick with the mean throughout all experiments. The output embeddings are then used as input for the next decoding recursion, until we get as many elements as the original input sequence.

2.2 Step Encoding

To help the recursive autoencoder keep track of the number of recursive steps which were applied to an embedding, we concatenate to the input of $\text{MLP}_{\text{enc}}$ the number of the current recursive step as a scalar, starting from 1 for the first recursion, as well as a one-hot of that scalar with custom bucket sizes: {1, 2, 3-4, 5-7, . . .}. All buckets after 5-7 are also of size 3. We found this combination of both scalar and one-hot to give the best results. When decoding, we also concatenate to the input of $\text{MLP}_{\text{dec}}$ this scalar and one-hot, but instead of increasing our recursive step count, we subtract one from it after each recursive decoding step.

2.3 Input Representation

We use uncased GloVe embeddings (Pennington et al., 2014) of size 300 to represent the initial input sequence words, which are then passed through a learned resizing multilayer perceptron ($\text{MLP}_{\text{in}}$) before being given as input to the encoder. The output of the decoder is also passed through a different learned resizing multilayer perceptron ($\text{MLP}_{\text{out}}$) to get back to the GloVe embedding size. We use a vocabulary of 337k words throughout all tasks.

2.4 Mean Squared Error

To compute the loss between input GloVe embeddings and the output embeddings, we use the mean squared error (MSE) loss. Obtaining an MSE of 0 would mean our method is lossless, which would not necessarily be the case with the cross entropy loss. MSE also allows us to work with a vocabulary far larger than what is usually the case, as the common classification layer plus cross entropy loss setup tends to have issues with large vocabularies.

2.5 Implementation Details

The two embeddings given as input to $\text{MLP}_{\text{enc}}$ are each of size $d_{\text{emb}}$, as is also its output embedding. Same for $\text{MLP}_{\text{dec}}$: the input embedding is of size $d_{\text{emb}}$ and the two output embeddings are each of size $d_{\text{emb}}$. Both multilayer perceptrons have one hidden layer of size $\frac{3}{2} d_{\text{emb}}$, halfway between the input and output size. We apply LayerNorm (Lei Ba et al., 2016) on the output of each layer of the MLPs, followed by a ReLU activation. The input and output resizing modules $\text{MLP}_{\text{in}}$ and $\text{MLP}_{\text{out}}$ also have one hidden layer whose size is halfway between their input and output sizes. They also use ReLU activations, except for $\text{MLP}_{\text{out}}$'s last layer. No LayerNorm is used in these resizing components. We test four different $d_{\text{emb}}$ embedding sizes in Section 3.1.
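To make the decoding step concrete, here is a hedged PyTorch sketch of equations (2) and (3), again not the authors' code: MLP_dec expands each embedding into two, and overlapping outputs are averaged. The step encoding of §2.2 and the GloVe resizing MLPs of §2.3 are omitted, and the hidden width mirrors the assumption used in the encoder sketch above.

import torch
import torch.nn as nn

D_EMB = 512  # illustrative embedding size

class MLPDec(nn.Module):
    """Shared decoder MLP: one embedding (d_emb) -> a concatenated pair (2*d_emb)."""
    def __init__(self, d=D_EMB):
        super().__init__()
        hidden = 3 * d // 2  # assumed hidden width, halfway between d and 2d
        self.net = nn.Sequential(
            nn.Linear(d, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * d), nn.LayerNorm(2 * d), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

def decode(sentence_emb, n, mlp_dec):
    """Equations (2)-(3): expand one sentence embedding back into n embeddings.
    Each x_i yields [y_i; y'_{i+1}]; overlapping y_i and y'_i are averaged."""
    cur = sentence_emb.unsqueeze(0)                    # (1, d_emb)
    while cur.shape[0] < n:
        y, y_prime = mlp_dec(cur).chunk(2, dim=-1)     # each (m, d_emb)
        cur = torch.cat([y[:1],                        # y_1 has no overlap
                         (y[1:] + y_prime[:-1]) / 2,   # equation (3): mean of overlaps
                         y_prime[-1:]], dim=0)         # y'_{m+1} has no overlap
    return cur

# Round trip with the encoder sketch: one embedding expanded back to n = 5.
recon = decode(torch.randn(D_EMB), n=5, mlp_dec=MLPDec())
print(recon.shape)  # torch.Size([5, 512])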
3 Experiments

In this section, we first present the autoencoding results. Then we present the results on sentiment analysis using our sentence encoding on the Stanford Sentiment Treebank dataset (Socher et al., 2013).

3.1 Autoencoding

As a first experiment, we tested our model on the autoencoding task. Training was done on the BookCorpus (Zhu et al., 2015) dataset, comprising eleven thousand books and almost one billion words. At test time, we measured accuracy by computing the MSE distance between an output embedding and the entire vocabulary. We count an output embedding as "correct" if the closest embedding out of all the vocabulary of size 337k is its corresponding input embedding. For the autoencoder, we tried four embedding sizes: 300, 512, 1024 and 2048. In all cases, models are given GloVe embeddings of size 300 as input. They also all output embeddings of size 300. Reconstruction accuracy is shown for different sequence lengths in Figure 2. With an embedding size of 2048, the model is able to reproduce near perfectly sequences of up to 40 tokens. Longer sentences aren't able to do better and have on average 39 correct tokens. This results in model accuracy linearly going down after a certain threshold, as can be seen in Figure 2. To demonstrate how good our model is at reconstruction, we trained a stacked LSTM on the same autoencoding task. Figure 2 shows performance of LSTM models for embedding sizes 300, 512 and 1024. All LSTMs have two encoder and two decoder layers. The 1024 variant seems to have reached a saturation point, as it performs similarly to the 512 version. All RAEs and LSTMs were trained for 20 epochs and models with same embedding size have the same capacity.

Figure 2: Accuracy comparison of different embedding sizes (300, 512, 1024 and 2048) for different sequence lengths. Left is our recursive autoencoder and right a stacked LSTM. An output embedding is counted as correct if the closest embedding out of all the vocabulary is its corresponding input embedding.

Figure 3: Accuracy comparison of our RAE model versus a stacked LSTM for embedding sizes 512 and 1024. Models of same embedding size have the same capacity.
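A small sketch of the reconstruction-accuracy measurement described in §3.1 above (an illustration, not the evaluation script): each output embedding is matched to its nearest vocabulary embedding by Euclidean distance (equivalent to squared-error distance for ranking), in chunks to keep memory manageable. The chunk size and the toy vocabulary are assumptions.

import torch

def reconstruction_accuracy(outputs, target_ids, vocab_emb, chunk=4096):
    """Count an output embedding as correct when its nearest vocabulary embedding
    (over all ~337k entries) is the embedding of the original input token."""
    correct = 0
    for start in range(0, outputs.shape[0], chunk):
        out = outputs[start:start + chunk]               # (c, 300)
        d = torch.cdist(out, vocab_emb)                  # (c, |V|) pairwise distances
        pred = d.argmin(dim=1)                           # index of the closest vocab entry
        correct += (pred == target_ids[start:start + chunk]).sum().item()
    return correct / outputs.shape[0]

# Toy usage with a small random "vocabulary" standing in for the 337k GloVe table.
vocab = torch.randn(1000, 300)
ids = torch.randint(0, 1000, (40,))
acc = reconstruction_accuracy(vocab[ids] + 0.01 * torch.randn(40, 300), ids, vocab)
print(acc)  # close to 1.0 for small perturbations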
3.2 Sentiment Analysis With strong autoencoding performance, one would think that features get deeply encoded into the representation, making it difficult to easily extract them back, which is crucial for a great number of tasks. To this end, we test our architecture on the sentiment analysis task. The Stanford Sentiment Treebank (Socher et al., 2013) is a sentiment classification task where each sample in the dataset is a sentence with its corresponding sentiment tree. Each node in the tree is human annotated, with the leaves representing the sentiment of the words, all the way up to the root node, representing the whole sequence. Comparison is usually done on a binary or five label classification task, ranging from negative to positive. Most models are usually by design only able to classify the root node, while our architecture al1581 Figure 4: Difference in accuracy when counting an output embedding as correct if the corresponding input embedding is in the five closest versus the closest. Comparison is done on our RAE model with embedding sizes 1024 and 2048. lows classification of every node in the tree. We use a linear layer on top of each embedding in the encoder to classify sentiment. We present in Table 2 results for fine-grained sentiment analysis on all nodes as well as comparison with recent state-of-the-art methods on binary sentiment classification of the root node. For the five class sentiment task, we compare our model with the original Sentiment Treebank results and beat all the models. In order to compare our approach with state-of-the-art methods, we also trained our model on the binary classification task with sole classification of the root node. Other presented models are GenSen (Subramanian et al., 2018) and BERTBASE (Devlin et al., 2018). Both these recent methods perform extremely well on multiple natural language processing tasks. We set the RAE embedding size demb to 1024. Larger embedding sizes did not improve the accuracy of our model for this task. In this setting, the RAE has 11M parameters, while the models we compare with, GenSen and BERTBASE, have respectively 100M and 110M parameters. Both our model and GenSen fail to beat the RNTN model for the SST-2 task. We see an improvement in accuracy when combining both methods’ embeddings, surpassing every model in the SST paper, while being close to BERTBASE’s performance. Training solely on sentiment classification had same performance as jointly training on the autoencoding task, as the latter had no impact on the sentiment analysis performance. Joint training though had a small impact on reconstruction. Model SST-5 (All) SST-2 (Root) NB 67.2 81.8 SVM 64.3 79.4 BiNB 71.0 83.1 VecAvg 73.3 80.1 RNN 79.0 82.4 MV-RNN 78.7 82.9 RNTN 80.7 85.4 RAE 81.07 83 GenSen 84.5 RAE + GenSen 86.43 BERTBASE 93.5 Table 2: SST-5 and SST-2 performance on all and root nodes respectively. Model results in the first section are from the Stanford Treebank paper (2013). GenSen and BERTBASE results are from (Subramanian et al., 2018) and (Devlin et al., 2018) respectively. 4 Conclusion & Future Work In this paper, we introduced a recursive autoencoder method for generating sentence and subsentence representations. Decoding from a single embedding and working with a 337k vocabulary, we manage to get near perfect reconstruction for sequences of up to 40 length and very good reconstruction for longer sequences. 
Capitalizing on our model’s architecture, we showed our method to perform well on sentiment analysis and more precisely its advantage when classifying sentiment trees. Continuing in the direction of training our model on different NLP tasks, we would like our representations to generalize well on downstream tasks while maintaining their reconstruction property. We would also like to further explore the usage of sub-sentence representations in natural language processing. Finally, we would like to learn our sentence embeddings’ latent space, similarly to Subramanian et al. (2018)’s method, so as to leverage our autoencoder’s strong reconstruction ability and generate very long sequences of text. Acknowledgments This research was enabled in part by support provided by Compute Canada (www. computecanada.ca). We would also like to thank Tom Bosc, Sandeep Subramanian, Sai Rajeswar and Chinnadhurai Sankar for their invaluable feedback. 1582 References Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. arXiv e-prints, page arXiv:1508.05326. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. arXiv e-prints, page arXiv:1409.1259. Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. arXiv preprint arXiv:1803.05449. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. arXiv eprints, page arXiv:1705.02364. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. arXiv e-prints, page arXiv:1901.02860. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv e-prints, page arXiv:1810.04805. Cicero Dos Santos and Maira Gatti de Bayser. 2014. Deep convolutional neural networks for sentiment analysis of short texts. G. E. Hinton and R. R. Salakhutdinov. 2006. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507. Rie Johnson and Tong Zhang. 2014. Effective Use of Word Order for Text Categorization with Convolutional Neural Networks. arXiv e-prints, page arXiv:1412.1058. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A Convolutional Neural Network for Modelling Sentences. arXiv e-prints, page arXiv:1404.2188. Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-Thought Vectors. arXiv e-prints, page arXiv:1506.06726. Filippos Kokkinos and Alexandros Potamianos. 2017. Structural Attention Neural Networks for improved sentiment analysis. arXiv e-prints, page arXiv:1701.01811. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. 1989. Backpropagation applied to handwritten zip code recognition. Neural Comput., 1(4):541–551. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer Normalization. arXiv e-prints, page arXiv:1607.06450. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. 
Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding. arXiv e-prints, page arXiv:1904.09482. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019b. Multi-Task Deep Neural Networks for Natural Language Understanding. arXiv e-prints, page arXiv:1901.11504. Thang Luong, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 104–113. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Jordan B. Pollack. 1990. Recursive distributed representations. Artificial Intelligence, 46(1):77 – 105. Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011a. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Proceedings of the 24th International Conference on Neural Information Processing Systems, NIPS’11, pages 801– 809, USA. Curran Associates Inc. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211. Association for Computational Linguistics. Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2010. Learning continuous phrase representations and syntactic parsing with recursive neural networks. In In Proceedings of the NIPS-2010 Deep Learning and Unsupervised Feature Learning Workshop. Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011b. Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP). 1583 Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642. Association for Computational Linguistics. Sandeep Subramanian, Sai Rajeswar Mudumba, Alessandro Sordoni, Adam Trischler, Aaron C Courville, and Chris Pal. 2018. Towards text generation with adversarially learned neural outlines. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 7551–7563. Curran Associates, Inc. Sandeep Subramanian, Adam Trischler, Yoshua Bengio, and Christopher J Pal. 2018. Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning. arXiv e-prints, page arXiv:1804.00079. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. arXiv e-prints, page arXiv:1706.03762. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2017. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. arXiv e-prints, page arXiv:1704.05426. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In C. 
Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 649–657. Curran Associates, Inc. Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In arXiv preprint arXiv:1506.06724.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1584–1594 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1584 Open Vocabulary Learning for Neural Chinese Pinyin IME Zhuosheng Zhang1,2, Yafang Huang1,2, Hai Zhao1,2,∗ 1Department of Computer Science and Engineering, Shanghai Jiao Tong University 2Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, Shanghai, China {zhangzs, huangyafang}@sjtu.edu.cn, [email protected] Abstract Pinyin-to-character (P2C) conversion is the core component of pinyin-based Chinese input method engine (IME). However, the conversion is seriously compromised by the ambiguities of Chinese characters corresponding to pinyin as well as the predefined fixed vocabularies. To alleviate such inconveniences, we propose a neural P2C conversion model augmented by an online updated vocabulary with a sampling mechanism to support open vocabulary learning during IME working. Our experiments show that the proposed method outperforms commercial IMEs and state-of-theart traditional models on standard corpus and true inputting history dataset in terms of multiple metrics and thus the online updated vocabulary indeed helps our IME effectively follows user inputting behavior. 1 Introduction Chinese may use different Chinese characters up to 20,000 so that it is non-trivial to type the Chinese character directly from a Latin-style keyboard which only has 26 keys (Zhang et al., 2018a). The pinyin as the official romanization representation for Chinese provides a solution that maps Chinese character to a string of Latin alphabets so that each character has a letter writing form of its own and users can type pinyin in terms of Latin letters to input Chinese characters into a computer. Therefore, converting pinyin to Chinese characters is the most basic module of all pinyinbased IMEs. As each Chinese character may be mapped to a pinyin syllable, it is natural to regard the Pinyinto-Character (P2C) conversion as a machine trans∗Corresponding author. This paper was partially supported by National Key Research and Development Program of China (No. 2017YFB0304100) and Key Projects of National Natural Science Foundation of China (U1836222 and 61733011). lation between two different languages, pinyin sequences and Chinese character sequences (namely Chinese sentence). Actually, such a translation in P2C procedure is even more straightforward and simple by considering that the target Chinese character sequence keeps the same order as the source pinyin sequence, which means that we can decode the target sentence from left to right without any reordering. Meanwhile, there exists a well-known challenge in P2C procedure, too much ambiguity mapping pinyin syllable to character. In fact, there are only about 500 pinyin syllables corresponding to ten thousands of Chinese characters, even though the amount of the commonest characters is more than 6,000 (Jia and Zhao, 2014). As well known, the homophone and the polyphone are quite common in the Chinese language. Thus one pinyin may correspond to ten or more Chinese characters on the average. However, pinyin IME may benefit from decoding longer pinyin sequence for more efficient inputting. When a given pinyin sequence becomes longer, the list of the corresponding legal character sequences will significantly reduce. 
For example, IME being aware of that pinyin sequence bei jing can be only converted to either 背景(background) or 北京(Beijing) will greatly help it make the right and more efficient P2C decoding, as both pinyin bei and jing are respectively mapped to dozens of difference single Chinese characters. Table 1 illustrates that the list size of the corresponding Chinese character sequence converted by pinyin sequence bei jing huan ying ni (北京欢迎你, Welcome to Beijing) is changed according to the different sized source pinyin sequences. To reduce the P2C ambiguities by decoding longer input pinyin sequence, Chinese IMEs may often utilize word-based language models since character-based language model always suffers 1585 Pinyin seq. conbei jing huan ying ni sists of 1 syllable 被 敬 环 英 你 你 你 北 北 北 静 换 颖 睨 呗 井 还 迎 迎 迎 逆 杯 京 京 京 幻 影 拟 背 经 欢 欢 欢 应 尼 Pinyin seq. conbei jing huan ying ni sists of 2 syllables 北 北 北京 京 京 幻影 你 背景 欢 欢 欢迎 迎 迎 你 Pinyin seq. conbei jing huan ying ni sists of 5 syllables 北 北 北京 京 京欢 欢 欢迎 迎 迎你 你 你 Table 1: The shorter the pinyin sequence is, the more character sequences will be mapped. from the mapping ambiguity. However, the effect of the work in P2C will be undermined with quite restricted vocabularies. The efficiency of IME conversion depends on the sufficiency of the vocabulary and previous work on machine translation has shown a large enough vocabulary is necessary to achieve good accuracy (Jean et al., 2015). In addition, some sampling techniques for vocabulary selection are proposed to balance the computational cost of conversion (Zhou et al., 2016; Wu et al., 2018). As IMEs work, users inputting style may change from time to time, let alone diverse user may input quite diverse contents, which makes a predefined fixed vocabulary can never be sufficient. For a convenient solution, most commercial IMEs have to manually update their vocabulary on schedule. Moreover, the training for word-based language model is especially difficult for rare words, which appear sparsely in the corpus but generally take up a large share of the dictionary. To well handle the open vocabulary learning problem in IME, in this work, we introduce an online sequence-to-sequence (seq2seq) model for P2C and design a sampling mechanism utilizing our online updated vocabulary to enhance the conversion accuracy of IMEs as well as speed up the decoding procedure. In detail, first, a characterenhanced word embedding (CWE) mechanism is proposed to represent the word so that the proposed model can let IME generally work at the word level and pick a very small target vocabulary for each sentence. Second, every time the user makes a selection contradicted the prediction given by the P2C conversion module, the module will update the vocabulary accordingly. Our evaluation will be performed on three diverse corpora, including two which are from the real user inputting history, for verifying the effectiveness of the proposed method in different scenarios. The rest of the paper is organized as follows: Section 2 discusses relevant works. Sections 3 and 4 introduce the proposed model. Experimental results and the model analysis are respectively in Sections 5 and 6. Section 7 concludes this paper. 2 Related Work To effectively utilize words for IMEs, many natural language processing (NLP) techniques have been applied. Chen (2003) introduced a joint maximum n-gram model with syllabification for grapheme-to-phoneme conversion. 
Chen and Lee (2000) used a trigram language model and incorporated word segmentation to convert pinyin sequence to Chinese word sequence. Xiao et al. (2008) proposed an iterative algorithm to discover unseen words in corpus for building a Chinese language model. Mori et al. (2006) described a method enlarging the vocabulary which can capture the context information. For either pinyin-to-character for Chinese IMEs or kana-to-kanji for Japanese IMEs, a few language model training methods have been developed. Mori et al. (1998) proposed a probabilistic based language model for IME. Jiampojamarn et al. (2008) presented online discriminative training. Lin and Zhang (2008) proposed a statistic model using the frequent nearby set of the target word. Chen et al. (2012) used collocations and kmeans clustering to improve the n-pos model for Japanese IME. Jiang et al. (2007) put forward a PTC framework based on support vector machine. Hatori and Suzuki (2011) and Yang et al. (2012) respectively applied statistic machine translation (SMT) to Japanese pronunciation prediction and Chinese P2C tasks. Chen et al. (2015); Huang et al. (2018) regarded the P2C as a translation between two languages and solved it in neural machine translation framework. All the above-mentioned work, however, still rely on a predefined fixed vocabulary, and IME users have no chance to refine their own dictionary through a user-friendly way. Zhang et al. (2017) is mostly related to this work, which also offers an online mechanism to adaptively update user vocabulary. The key difference between their work 1586 and ours lies on that this work presents the first neural solution with online vocabulary adaptation while (Zhang et al., 2017) sticks to a traditional model for IME. Recently, neural networks have been adopted for a wide range of tasks (Li et al., 2019; Xiao et al., 2019; Zhou and Zhao, 2019; Li et al., 2018a,b). The effectiveness of neural models depends on the size of the vocabulary on the target side and previous work has shown that vocabularies of well over 50K word types are necessary to achieve good accuracy (Jean et al., 2015) (Zhou et al., 2016). Neural machine translation (NMT) systems compute the probability of the next target word given both the previously generated target words as well as the source sentence. Estimating this conditional distribution is linear in the size of the target vocabulary which can be very large for many translation tasks. Recent NMT work has adopted vocabulary selection techniques from language modeling which do not directly generate the vocabulary from all the source sentences (L’Hostis et al., 2016; Wu et al., 2018). The latest studies on deep neural network prove the demonstrable effects of word representation on various NLP tasks, such as language modeling (Verwimp et al., 2017), question answering (Zhang and Zhao, 2018; Zhang et al., 2018b), dialogue systems (Zhang et al., 2018c; Zhu et al., 2018) and machine translation (Wang et al., 2017a,b, 2018; Wang et al., 2018; Chen et al., 2018). As for improved word representation in IMEs, Hatori and Suzuki (2011) solved Japanese pronunciation inference combining word-based and character-based features within SMT-style framework to handle unknown words. Neubig et al. (2013) proposed character-based SMT to handle sparsity. Okuno and Mori (2012) introduced an ensemble model of word-based and character-based models for Japanese and Chinese IMEs. 
All the above-mentioned methods used similar solution about character representation for various tasks. Our work takes inspiration from (Luong and Manning, 2016) and (Cai et al., 2017). The former built a novel representation method to tackle the rare word for machine translation. In detail, they used word representation network with characters as the basic input units. Cai et al. (2017) presented a greedy neural word segmenter with balanced word and character embedding inputs. In the meantime, high-frequency word embeddings are attached to character embedding via average pooling while low-frequency words are computed from character embedding. Our embeddings also contain different granularity levels of embedding, but the word vocabulary is capable of being updated in accordance with users’ inputting choice during IME working. In contrast, (Cai et al., 2017) build embeddings based on the word frequency from a fixed corpus. 3 Our Models For a convenient reference, hereafter a character in pinyin language also refers to an independent pinyin syllable in the case without causing confusion, and word means a pinyin syllable sequence which may correspond to a true word written in Chinese characters. As illustrated in Figure 1, the core of our hybrid P2C is a seq2seq model (Cho et al., 2014) in terms of the encoder-decoder framework. Given a pinyin sequence X and a Chinese character sequence Y , the encoder of our neural P2C model utilizes a network for pinyin representation in which both word-level and character-level embedding are exploited, and the decoder is to generate the Chinese target sequence which maximizes P(Y |X) using maximum likelihood training. Starting from an initial vocabulary with indicator from each turn of the user inputting choice, the online learning module helps update the word vocabulary by minimizing empirical prediction risk. 3.1 Pinyin-Character Parallel Corpus Pinyin-character parallel corpus can be conveniently generated by automatically annotating Chinese character text with pinyin as introduced in (Yang et al., 2012). Using standard Chinese word segmentation methods, we may segment both character and pinyin text into words with the same segmentation for each sentence pair. 3.2 Encoder-Decoder The encoder is a bi-directional long shortterm memory (LSTM) network (Hochreiter and Schmidhuber, 1997). The vectorized inputs are fed to forward and backward LSTMs to obtain the internal representation of two directions. The output for each input is the concatenation of the two vectors from both directions. Our decoder is based on the global attentional models proposed by Lu1587 Character-enhanced word embeddings Decoder Encoder Attention Layer h c h h y Vocabulary & BiGRU WE ... 北京欢迎你 (Welcome to Beijing) 北 north 京 capital 欢 fun 迎 welcome 你 you CE CWE for 北京欢迎你 Add new words 北京人, beijingren ⋯ 北, bei 被, bei 北京, beijing 备选, beixuan 背景, beijing 北, bei 被,bei 北京 ,beijing ⋯ 备选 ,beixuan Input Vectors pinyin embeddings 背景, beijing s t t t t ⋯ 北京人,beijingren ⋯ Figure 1: Architecture of the proposed Neural-based Chinese Input Method hs ct yt ht at ht Figure 2: Architecture of the attention-based encoderdecoder model. ong et al. (2015) to consider the hidden states of the encoder when deriving the context vector. The probability is conditioned on a distinct context vector for each target word. The context vector is computed as a weighted sum of previous hidden states. 
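For illustration, the attention step just described can be sketched as follows; this is a minimal dot-product variant of the global attention of Luong et al. (2015), and the tensor names and the choice of scoring function are illustrative assumptions rather than the authors' exact implementation.

import torch

def global_attention_context(decoder_state, encoder_states):
    # decoder_state:  (batch, hidden)          current decoder hidden state h_t
    # encoder_states: (batch, src_len, hidden) encoder outputs h_s
    # Dot-product alignment scores between h_t and every source state h_s.
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2))  # (batch, src_len, 1)
    weights = torch.softmax(scores, dim=1)                          # attention weights a_t(s)
    # Context vector c_t: attention-weighted sum of encoder hidden states.
    context = (weights * encoder_states).sum(dim=1)                 # (batch, hidden)
    return context, weights.squeeze(2)

The context vector returned here is what conditions the prediction of each target character, as described next.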
The probability of each candidate word as being the recommended one is predicted using a softmax layer over the inner-product between source embeddings and candidate target characters. Figure 2 shows the architecture. 4 Online P2C Learning with Vocabulary Adaptation As the core of Chinese IME, P2C conversion has been formulized into a seq2seq model as machine translation between pinyin and character sequences, there are still a few differences between P2C converting and standard machine translation. 1) Considering both pinyin syllables and Chinese characters are segmented into singlecharacter word as Figure 3a, there is a one-toone mapping between any character and its corresponding pinyin syllable without word reordering, while typical machine translation does not enjoy such benefits and has to perform careful word reordering explicitly or implicitly. 2) As Chinese language is always sensitive to the segmentation scheme, in the writing of either the Chinese character or the pinyin, P2C as NMT may suffer from alignment mismatch on both sides like Figure 3b or benefit a lot from perfect one-to-one alignment like Figure 3c, while typical machine translation is seldom affected by such segmentation alignment. 3) P2C as a working component of IME, every time it returns a list of Chinese character sequence predictions, user may indicate which one is what he or she actually expects to input. To speed up the 1588 inputting, IME always tries to rank the user’s intention at the top-1 position. So does IME, we say there is a correct conversion or prediction. Different from machine translation job, users’ inputting choice will always indicate the ‘correct’ prediction right after IME returns the list of its P2C conversion results. (a) (b) (c) Figure 3: Different segmentations decide different alignments Therefore, IME working mode implies an online property, we will let our neural P2C model also work and evaluate in such a way. Meanwhile, online working means that our model has to track the continuous change of users’ inputting contents, which is equally a task about finding new words in either pinyin sequence or character sequence. However, the word should be obtained through the operation of word segmentation. Note that as we have discussed above, segmentation over pinyin sequence is also necessary to alleviate the ambiguity of pinyin-to-character mapping. Thus the task here for IME online working actually requires an online word segmentation algorithm if we want to keep the aim of open vocabulary learning. Our solution to this requirement is adopting a vocabulary-based segmentation approach, namely, the maximum matching algorithm which greedily segments the longest matching in the given vocabulary at the current segmentation point of a given sequence. Then adaptivity of the segmentation thus actually relies on the vocabulary online updating. Algorithm 1 gives our online vocabulary updating algorithm for one-turn IME inputting. Note the algorithm maintains a pinyin-character bilingual vocabulary. Collecting the user’s inputting choices through IME, our P2C model will perform online training over segmented pinyin and character sequences with the updated vocabulary. The updating procedure introduces new words by comparing the user’s choice and IME’s top-1 prediction. The longest mismatch n-gram characters will be added as new word. 
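As a concrete illustration of the vocabulary-based segmentation mentioned above, the following is a minimal sketch of greedy forward maximum matching; the function name, the maximum word length, and the simplified set-based vocabulary are our own assumptions, whereas the real system operates over the pinyin-character bilingual vocabulary that the algorithm below (Algorithm 1) keeps updated.

def max_match_segment(sequence, vocab, max_word_len=6):
    # Greedily take the longest span present in `vocab` at each position;
    # fall back to a single symbol if nothing matches.
    # `sequence` is a list of syllables (or characters); `vocab` is a set of tuples.
    segments, i = [], 0
    while i < len(sequence):
        match = None
        # Try the longest candidate first, shrinking until a match is found.
        for j in range(min(len(sequence), i + max_word_len), i, -1):
            candidate = tuple(sequence[i:j])
            if candidate in vocab:
                match = candidate
                break
        if match is None:            # unknown: emit a single symbol
            match = (sequence[i],)
        segments.append(match)
        i += len(match)
    return segments

# Example with a hypothetical vocabulary:
# max_match_segment(["bei", "jing", "huan", "ying", "ni"],
#                   {("bei", "jing"), ("huan", "ying"), ("ni",)})
# -> [("bei", "jing"), ("huan", "ying"), ("ni",)]

Because segmentation is driven entirely by the vocabulary, adding a new word through the updating procedure immediately changes how future pinyin input is segmented.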
Algorithm 1 Online Vocabulary Updating Algorithm Input: • Vocabulary: V = {(Pyi, Chi)|i = 1, 2, 3, · · · }; • Input pinyin sequence: Py = {pyi|i = 1, 2, 3, · · · }; • IME predicted top-1 character sequence: Cm = {cmi|i = 1, 2, 3, · · · }; • User choosing character sequence: Cu = {cui|i = 1, 2, 3, · · · }. Output: • The Updated Vocabulary: ˆV . 1:  Adding new words 2: for n = 6 to 2 do 3: Compare n-gram of Cu and Cm 4: if Mismatch Ch is found // both the first and last characters are different at least then 5: if Ch is not in ˆV then 6: V = V ∪{Ch} 7: end if 8: end if 9: if no mismatch is found then 10: break 11: end if 12: end for 13: return ˆV ; We adopt a hybrid mechanism to balance both words and characters representation, namely, Character-enhanced Word Embedding (CWE). In the beginning, we keep an initial vocabulary with the most frequent words. The words inside the vocabulary are represented as enhanced-embedding, and those outside the list are computed from character embeddings. A pre-trained word2vec model (Mikolov et al., 2013) is generated to represent the word embedding WE(w)(w ∈ˆV ). At the same time we feed all characters of each word to a bi-gated recurrent unit (bi-GRU) (Cho et al., 2014) to compose the character level representation CE(w) (w = {ci|i = 1, 2, 3, · · · }). The enhanced embedding CWE(w) is to straightforwardly integrate word embedding and character embedding by element-wise multiplication, CWE(w) = WE(w) ⊙CE(w) 1589 5 Target Vocabulary Selection In this section, we aim to prune the target vocabulary ˆV as small as possible to reduce the computing time. Our basic idea is to maintain a separate and small vocabulary ˆV for each sentence so that we only need to compute the probability distribution over a small vocabulary for each sentence. We first generate a sentence-level vocabulary Vs to be one part of our ˆV , which includes the mapped Chinese words of each pinyin in the source sentence. As the bilingual vocabulary V consists of the pinyin and Chinese word pair of all the words that ever appeared, it is natural to use a prefix maximum matching algorithm to obtain a sorted list of relevant candidate translations D(x) = [Ch1, Ch2, ...] for the source pinyin. Thus, we generate a target vocabulary Vs for a sentence x = (Py1, Py2, ...) by merging all the candidates of all pinyin. In order to cover target un-aligned functional words, we also need top n most common target words Vc. In training procedure, the target vocabulary ˆV for a sentence x needs to include the target words Vt in the reference y, ˆV = Vs ∪Vc ∪Vy. In decoding procedure, the ˆV may only contain two parts, ˆV = Vs ∪Vc. 6 Experiment 6.1 Datasets and Evaluation Metrics We adopt two corpora for evaluation. The People’s Daily corpus is extracted from the People’s Daily from 1992 to 1998 by Peking University (Emerson, 2005). The bilingual corpus can be straightforwardly produced by the conversion proposed by (Yang et al., 2012). Contrast to the style of the People’s Daily, the TouchPal corpus (Zhang et al., 2017) is a large scale of user chat history collected by TouchPal IME, which are more colloquial. Hence, we use the latter to simulate user’s chatting input to verify our online model’s adaptability to different environments. The test set size is 2,000 MIUs in both corpora. Table 2 shows the statistics of two corpora1. Two metrics are used for our evaluation by following previous work: Maximum Input Unit (MIU) Accuracy and KeyStroke Score (KySS) (Jia and Zhao, 2013). 
The former measures the con1The two corpora along with our codes are available at https://github.com/cooelf/OpenIME . version accuracy of MIU, which is defined as the longest uninterrupted Chinese character sequence inside a sentence. As the P2C conversion aims to output a rank list of corresponding character sequences candidates, the top-K MIU accuracy means the possibility of hitting the target in the first K predict items. We will follow the definition of (Zhang et al., 2017) about top-K accuracy. The KySS quantifies user experience by using keystroke count. An IME with higher KySS is supposed to perform better. For an ideal IME, there will be KySS = 1. 6.2 Settings IME works giving a list of character sequence candidates for user choosing. Therefore, measuring IME performance is equivalent to evaluating such a rank list. In this task, we select 5 converted character sequence candidates for each pinyin sequence. Given a pinyin sequence and candidate characters, our model is designed to rank the characters in an appropriate order. Here is the model setting we used: a) pretrained word embeddings were generated on the People’s Daily corpus; b) the recurrent neural networks for encoder and decoder have 3 layers and 500 cells, and the representation networks have 1 layer; c) the initial learning rate is 1.0, and we will halve the learning rate every epoch after 9 epochs; d) dropout is 0.3; e) the default frequency filter ratio for CWE establishment is 0.9. The same setting is applied to all models. For a balanced treatment over both corpora, we used baseSeg (Zhao et al., 2006) to segment all text, then extract all resulted words into the iniChinese Pinyin PD # MIUs 5.04M # Word 24.7M 24.7M # Vocab 54.3K 41.1K # Target Vocab (train) 2309 # Target Vocab (dec) 2168 TP # MIUs 689.6K # Word 4.1M 4.1M # Vocab 27.7K 20.2K # Target Vocab (train) 2020 # Target Vocab (dec) 2009 Table 2: MIUs count, word count and vocab size statistics of our training data. PD refers to the People’s Daily, TP is TouchPal corpus. 1590 System ED PD TP Top1 Top5 Top10 Top1 Top5 Top10 Existing P2C Google IME 70.9 78.3 82.3 57.5 63.8 69.3 OMWA 55.0 63.7 70.2 19.7 24.8 27.7 On-OMWA 64.4 72.9 77.9 57.1 71.1 80.9 Our P2C Base P2C 200 53.2 64.7 70.3 46.8 68.8 75.7 On-P2C 200 68.1 77.3 78.2 69.8 88.7 89.3 On-P2C (bi) 200 70.5 79.8 80.1 71.0 89.2 89.5 On-P2C (bi) 300 70.8 80.5 81.2 71.9 89.6 90.6 On-P2C (bi) 400 71.3 80.1 81.3 71.7 89.7 90.3 On-P2C (bi) 500 69.9 78.2 81.0 70.7 89.2 89.8 Table 3: Top-K accuracies on the People’s Daily (PD) , TouchPal (TP) corpora. ED refers to embedding dimension. The best results are in bold. tial vocabulary for online evaluation. We train the base P2C models for 13 epochs with plain stochastic gradient descent on the People’s Daily corpus with 32 batch size, and the online training process runs 25 epochs with 1 batch size. In practical application, we perform online training for once every 64 instances are inputs to control costs. 6.3 Results We compare our P2C conversion system with two baseline systems, Google IME 2 and Offline and Online models for Word Acquisition (OMWA, On-OMWA)(Zhang et al., 2017), and the results are shown in Table 3. On the People’s Daily corpus, our online model (On-P2C) outperforms the best model in (Zhang et al., 2017) by +3.72% top-1 MIU accuracy. The +14.94 improvement over the base P2C conversion module demonstrates that online learning vocabulary is effective. The using of bidirection LSTM encoder produces a notable boost of +2.41% accuracy. 
Our P2C model seizes a slight but significant improvement when tuning the dimension of CWE; our model gives 71.32% top-1 MIU accuracy. The performance on TouchPal corpus is similar and even more obvious; our best setting achieves 14.35% improvements compared to the best baseline. The P2C module of IME outputs a rank list, and then the IME once displays five candidates by default. If users cannot find the target character in the top 5 candidates, they have to click the Page 2The Google IME is the only commercial Chinese IME providing a debuggable API on the market now. Down button to navigate more candidates, which involve additional keystroke expenses for users. Therefore, we list the top-5 accuracy contrast to all baselines with top-10 results, and the comparison indicates the noticeable advancement of our P2C model. On TouchPal corpus, our model with the best setting achieves 89.7% accuracy, surpassing all the baselines. 7 Analysis 7.1 Effects of Online Updated Vocaburay Figure 5 shows the changes of the MIU accuracy during the training process. For both top-1 and top-5 MIU accuracy, models with online vocabulary updating significantly outperform those without updating throughout the entire training. Especially, online P2C gives top-1 MIU accuracy comparable to top-5 MIU accuracy given by the base P2C module, which suggests a great inputting efficiency improvement from introducing the online updating mechanism. Figure 4 expounds the adaptivity of our online P2C, in which we feed a joint corpus that is extracted from test corpora of the People’s Daily and Touchpal to the base P2C model and record the top-1 MIU accuracy per group after 2 epochs online vocabulary learning with batch size 1. We see that online P2C distinctly adapts the corpus change at the joint part. On the contrary, the base P2C which works offline performs stably only on its in-domain segments. 1591 Figure 4: Top-1 accuracy on an interlaced joint Corpus. P: the People’s daily segment, T: Touchpal segment. Filter Ratio 0 0.3 0.6 0.9 1.0 Top-5 Accuracy(valid set) 66.4 68.3 84.3 89.7 87.5 Top-5 Accuracy(test set) 66.3 68.1 83.9 89.6 87.1 Table 4: Top-5 accuracies of P2C after filtering specific ratio of words from vocabulary. Models the People’s Daily TouchPal Google IME 0.7535 0.6465 OMWA 0.6496 0.4489 On-OMWA 0.7115 0.7226 Base P2C 0.6922 0.7910 On-P2C 0.8301 0.8962 Table 5: User experience in terms of KySS Figure 5: Training curves of top-1 and top-5 accuracy on TouchPal. 7.2 Effects of Vocaburay Selection As an adaptive vocabulary is used in our decoder, it may result in a very large vocabulary to encumber the decoder with efficiency. Therefore, in practice, we need to control the size of the vocabulary for acceptable decoding speed. However, pruning the vocabulary in any way will surely hurt the performance due to all items in the adaptive vocabulary added with a reason. Figure 6 illustrates the relation between accuracy and decoding speed. The accuracies nearly do not get decreased with high enough decoding speed when only taking 88.9% full vocabulary in our system. Vocabulary Size (%) Vocab size Figure 6: MIU accuracy versus decoding time on CPU. 7.3 Effects of Word Filtering for CWE building As we mentioned in Section 4, P2C conversion quality depends on the CWE mechanism which will benefit from an appropriate filtration ratio. As shown in Table 4, when the filter ratio equals to 0.9, the accuracy reaches the top. We notice two observations. 
First, pure word-level representation is more efficient for P2C tasks than character-level which only achieves 66.3% accuracy. Second, omitting partial low-frequency word is instrumental in establishing word-level embedding. Actually, when building word embeddings, rare words behave no more than noise. If the rare words are not initialized properly, they would also bias the whole word embeddings. Therefore, we more incline to make character-level embedding to represent a rare word, and build CWE embeddings for others. 1592 7.4 User Experience Jia and Zhao (2013) proposed that the user-IME interaction contains three steps: pinyin input, candidate index choice and page turning. In Table 3, the 89.7% top-5 accuracy on TouchPal means that users have nearly 90% possibilities to straightly obtain the expected inputs in the first page (usually 5 candidates per page for most IME interface setting), so that user experiment using IMEs can be directly measured by KySS. Table 5 shows the mean KySS of various models. The results indicate that our P2C conversion module further facilitates the interaction. 8 Conclusion This paper presents the first neural P2C converter for pinyin-based Chinese IME with open vocabulary learning as to our best knowledge. We adopt an online working-style seq2seq model for the concerned task by formulizing it as a machine translation from pinyin sequence to Chinese character sequence. In addition, we propose an online vocabulary updating algorithm for further performance enhancement by tracking users behavior effectively. The evaluation on the standard linguistic corpus and true inputting history show the proposed methods indeed greatly improve user experience in terms of diverse metrics compared to commercial IME and state-of-the-art traditional model. References Deng Cai, Hai Zhao, Zhisong Zhang, Yuan Xin, Yongjian Wu, and Feiyue Huang. 2017. Fast and accurate neural word segmentation for Chinese. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 608–615. Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, and Tiejun Zhao. 2018. Syntax-directed attention for neural machine translation. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 4792–4799. Long Chen, Xianchao Wu, and Jingzhou He. 2012. Using collocations and k-means clustering to improve the n-pos model for japanese ime. In Proceedings of the Second Workshop on Advances in Text Input Methods, pages 45–56. Shenyuan Chen, Rui Wang, and Hai Zhao. 2015. Neural network language model for Chinese pinyin input method engine. In Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation (PACLIC), pages 455–461. Stanley F. Chen. 2003. Conditional and joint models for grapheme-to-phoneme conversion. In Eighth European Conference on Speech Communication and Technology, pages 2033–2036. Zheng Chen and Kai Fu Lee. 2000. A new statistical approach to Chinese pinyin input. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (COLING), pages 241– 247. Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734. Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. 
In Proceedings of the fourth SIGHAN workshop on Chinese language Processing, pages 123–133. Jun Hatori and Hisami Suzuki. 2011. Japanese pronunciation prediction as phrasal statistical machine translation. In Proceedings of 5th International Joint Conference on Natural Language Processing (IJCNLP), pages 993–1004. Yafang Huang, Zuchao Li, Zhuosheng Zhang, and Hai Zhao. 2018. Moon IME: neural-based chinese pinyin aided input method with customizable association. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), System Demonstration, pages 140–145. Sebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 1–10. Zhongye Jia and Hai Zhao. 2013. Kyss 1.0: a framework for automatic evaluation of Chinese input method engines. In Proceedings of the Sixth International Joint Conference on Natural Language Processing (IJCNLP), pages 1195–1201. Zhongye Jia and Hai Zhao. 2014. A joint graph model for pinyin-to-chinese conversion with typo correction. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL), pages 1512–1523. Sittichai Jiampojamarn, Colin Cherry, and Grzegorz Kondrak. 2008. Joint processing and discriminative training for letter-to-phoneme conversion. In Proceedings of ACL-08: HLT, pages 905–913. 1593 Wei Jiang, Yi Guan, Xiao Long Wang, and Bing Quan Liu. 2007. Pinyin to character conversion model based on support vector machines. Journal of Chinese Information Processing, 21(2):100–105. Gurvan L’Hostis, David Grangier, and Michael Auli. 2016. Vocabulary selection strategies for neural machine translation. arXiv preprint arXiv:1610.00072. Zuchao Li, Jiaxun Cai, Shexia He, and Hai Zhao. 2018a. Seq2seq dependency parsing. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 3203– 3214. Zuchao Li, Shexia He, Jiaxun Cai, Zhuosheng Zhang, Hai Zhao, Gongshen Liu, Linlin Li, and Luo Si. 2018b. A unified syntax-aware framework for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2401–2411. Zuchao Li, Shexia He, Hai Zhao, Yiqing Zhang, Zhuosheng Zhang, Xi Zhou, and Xiang Zhou. 2019. Dependency or span, end-to-end uniform semantic role labeling. arXiv preprint arXiv:1901.05280. Bo Lin and Jun Zhang. 2008. A novel statistical Chinese language model and its application in pinyinto-character conversion. In Proceedings of the 17th ACM conference on Information and knowledge management, pages 1433–1434. Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1054–1063. Minh Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1412– 1421. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Shinsuke Mori, Daisuke Takuma, and Gakuto Kurata. 2006. 
Phoneme-to-text transcription system with an infinite vocabulary. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics (ACL-COLING), pages 729–736. Shinsuke Mori, Masatoshi Tsuchiya, Osamu Yamaji, and Makoto Nagao. 1998. Kana-Kanji conversion by a stochastic model. Information Processing Society of Japan (IPSJ), pages 2946–2953. Graham Neubig, Taro Watanabe, Shinsuke Mori, and Tatsuya Kawahara. 2013. Machine translation without words through substring alignment. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL), pages 165– 174. Yoh Okuno and Shinsuke Mori. 2012. An ensemble model of word-based and character-based models for Japanese and Chinese input method. In Proceedings of the Second Workshop on Advances in Text Input Methods, pages 15–28. Lyan Verwimp, Joris Pelemans, Hugo Van Hamme, and Patrick Wambacq. 2017. Character-word LSTM language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 417– 427. Rui Wang, Andrew Finch, Masao Utiyama, and Eiichiro Sumita. 2017a. Sentence embedding for neural machine translation domain adaptation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 560–566. Association for Computational Linguistics. Rui Wang, Masao Utiyama, Andrew Finch, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2018. Sentence selection and weighting for neural machine translation domain adaptation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(10):1727–1741. Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2017b. Instance weighting for neural machine translation domain adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1482–1488. Association for Computational Linguistics. Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2018. Dynamic sentence sampling for efficient training of neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 298–304. Yu Wu, Wei Wu, Dejian Yang, Can Xu, Zhoujun Li, and Ming Zhou. 2018. Neural response generation with dynamic vocabularies. In Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), pages 5594–5601. Fengshun Xiao, Jiangtong Li, Hai Zhao, Rui Wang, and Kehai Chen. 2019. Lattice-Based Transformer Encoder for Neural Machine Translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL). Jinghui Xiao, Bing Quan Liu, and XiaoLong WANG. 2008. A self-adaptive lexicon construction algorithm for Chinese language modeling. Acta Automatica Sinica, 34(1):40–47. 1594 Shaohua Yang, Hai Zhao, and Bao-liang Lu. 2012. A machine translation approach for Chinese wholesentence pinyin-to-character conversion. In Proceedings of the 26th Asian Pacific conference on language and information and computation (PACLIC), pages 333–342. Xihu Zhang, Chu Wei, and Hai Zhao. 2017. Tracing a loose wordhood for Chinese input method engine. arXiv preprint arXiv:1712.04158. Zhuosheng Zhang, Yafang Huang, and Hai Zhao. 2018a. Subword-augmented embedding for cloze reading comprehension. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 1802–1814. Zhuosheng Zhang, Yafang Huang, Pengfei Zhu, and Hai Zhao. 2018b. 
Effective character-augmented word embedding for machine reading comprehension. In Proceedings of the Seventh CCF International Conference on Natural Language Processing and Chinese Computing (NLPCC), pages 27–39. Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, and Hai Zhao. 2018c. Modeling multi-turn conversation with deep utterance aggregation. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 3740–3752. Zhuosheng Zhang and Hai Zhao. 2018. One-shot learning for question-answering in gaokao history challenge. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 449–461. Hai Zhao, Chang-Ning Huang, Mu Li, and Taku Kudo. 2006. An improved Chinese word segmentation system with conditional random field. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 162–165. Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep recurrent models with fast-forward connections for neural machine translation. Transactions of the Association for Computational Linguistics (TACL), 4:371–383. Junru Zhou and Hai Zhao. 2019. Head-Driven Phrase Structure Grammar Parsing on Penn Treebank. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL). Pengfei Zhu, Zhuosheng Zhang, Jiangtong Li, Yafang Huang, and Hai Zhao. 2018. Lingke: A fine-grained multi-turn chatbot for customer service. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), System Demonstrations, pages 108–112.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1595–1605 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1595 Using LSTMs to Assess the Obligatoriness of Phonological Distinctive Features for Phonotactic Learning Nicole Mirea∗and Klinton Bicknell‡, ∗ ∗Northwestern University ‡Duolingo [email protected] [email protected] Abstract To ascertain the importance of phonetic information in the form of phonological distinctive features for the purpose of segmentlevel phonotactic acquisition, we compare the performance of two recurrent neural network models of phonotactic learning: one that has access to distinctive features at the start of the learning process, and one that does not. Though the predictions of both models are significantly correlated with human judgments of non-words, the feature-naive model significantly outperforms the feature-aware one in terms of probability assigned to a held-out test set of English words, suggesting that distinctive features are not obligatory for learning phonotactic patterns at the segment level. 1 Introduction Knowing a language involves having systematic expectations about the sequential sound patterns within syllables and words in the language— a sensitivity to the phonotactic generalizations that exist in the language. This sensitivity helps language users segment a continuous stream of speech (Vitevitch et al., 1997), incorporate new words into the lexicon (Storkel et al., 2006), and reconstruct parts of an utterance that may have been obscured by noise. However, the details of how language learners infer these phonotactic generalizations from incoming acoustic data are still unclear. The current project seeks to clarify the extent to which phonetic information (at the level of phonological distinctive features) is useful for predicting upcoming phones within a word, by building computational models of phonotactic acquisition. Phonotactic patterns are typically stated in terms of generalizations over natural classes; for example, voiced stops cannot follow voiceless stops word-finally in English. These natural classes are defined by a hierarchy or set of distinctive features that is either taken to be universal across languages (Chomsky and Halle, 1965; Clements, 2009) or emergent from the process of phonological acquisition—including phonotactic acquisition—in a particular language (Mielke, 2008; Dresher, 2015). Nevertheless, most models of phonotactic acquisition require that phonological distinctive features be specified in advance of learning. Our work interrogates this assumption through the following questions: 1. Is external information regarding phonological distinctive features a necessary prerequisite for learning word-level phonotactic generalizations? 2. Must models become sensitive to phonological properties of incoming segments in order to represent phonotactic generalizations? To answer them, we use recurrent neural networks with long short-term memory (LSTM) nodes, which have shown considerable success in learning patterns at the word (Sundermeyer et al., 2015) and character (Kim et al., 2016) levels. These models encode each phonetic segment in the inventory as a vector of numbers. With exposure to more training data, these representations adapt to the task at hand: incrementally predicting each segment in a word, given all previous segments in the word. 
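Concretely, under such a model the score of a whole word is obtained by chaining these incremental predictions: the log likelihood of the word is the sum of the log probabilities of each segment given the preceding ones, including an end-of-word symbol. The sketch below shows this scoring loop in outline; `model.next_distribution` is a stand-in for any next-segment predictor (here, the LSTM), and the symbol names are ours.

import math

def word_log_likelihood(model, segments):
    # `segments` is the word as a list of phones, e.g. ["k", "ae", "t"].
    history = ["<w>"]                       # start symbol
    total = 0.0
    for seg in segments + ["</w>"]:         # include the end symbol
        probs = model.next_distribution(history)   # dict: segment -> probability
        total += math.log(probs[seg])       # log-prob of the attested segment
        history.append(seg)
    return total                            # higher = judged more word-like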
If phonetic segments must be specified in terms of distinctive features in advance of phonotactic learning, we would expect a model that encodes phonetic segments in this manner the outset of training to ultimately represent phonotactic generalizations more accurately than one that initially encodes each phonetic segment as a random vector containing no phonetic information whatsoever. If, on the other hand, all information required to 1596 learn phonotactic generalizations is already latent in the sequence of segments, then the featurallyinformed model should have no advantage. Alternatively, initializing the model with distinctive features might constrain it to explore a suboptimal area of the solution space, ultimately leading to a less accurate representation of phonotactics. To investigate our second question, we determine whether the resultant learned encodings of each phonetic segment reflect phonetic information by examining the state of the models after training. If the post-training encodings do encode phonetic information, it would support the centrality of a phonetic representation of incoming acoustic data for phonotactic learning. Following previous work by Futrell et al. (2017), in Experiment 1 we evaluate how well our models capture phonotactic generalizations by measuring the probability they assign to unseen words from a test corpus. In accordance with rational analysis (Anderson and Milson, 1989), we make claims about the mind by studying the environment in which it operates, under the assumption that the mind adapts to the environment in order to achieve its goals—here, the goal of learning what constitutes a “likely” word-form in a language.1 If the optimal way of achieving this relies on phonological distinctive features, then we should expect that language users do draw upon this resource in order to infer phonotactic regularities. To verify this expectation, in Experiment 2 we evaluate our models using the more traditional means of assessing phonotactic learners: comparison with human wordlikeness ratings of nonwords. If our models have indeed learned an ecologically valid representation of English phonotactics, we expect the probabilities that they assign to non-words to correlate with wordlikeness ratings assigned by English speakers. 2 Related Work To ask whether distinctive features are helpful for phonotactic generalization, it is first essential to establish what form these phonotactic generalizations should take. Experimental work supports a characterization of phonotactics as gradient expectations over sequences of sounds, instead of cate1This goal is subordinate to other goals: speech segmentation, word learning, perception of speech in noise, communication, survival, etc. We have chosen this as a tractable level of analysis. gorical restrictions designating certain sound sequences as marked. In the phonotactic learning experiment conducted by Goldrick (2004), participants were able to acquire feature-based phonotactic constraints of both gradient and categorical forms. Gradient phonotactic sensitivity has also been found in children’s productions (Coady and Aslin, 2004) as well as adults’ wordlikeness judgments (Frisch et al., 2000). Following this, our model will represent gradient constraints, and its task will be to assign gradient acceptability ratings to sequences of phonetic segments. 
Bernard (2017) demonstrated that humans are capable of simultaneously tracking and learning phonotactic generalizations defined at the level of word boundaries, syllable positions, and cooccurrences between adjacent phonetic segments. Our LSTM networks are capable of capturing all three types of constraints. Crucially, they are capable of representing dependencies between nonadjacent units in a sequence (Sundermeyer et al., 2015), which means that they can learn gradient phonotactic constraints at both the word, syllable, and segment level, without the need for explicit syllable coding in the training data. Many models have addressed the question of how phonotactic generalizations are induced from incoming data (Hayes and Wilson, 2008; Albright, 2009; Futrell et al., 2017)2. These vary in terms of the algorithm that the learner uses to learn correspondences between segments. Nevertheless, most of these models of phonotactic acquisition presuppose that incoming data is encoded in terms of a set or hierarchy of distinctive features that are predetermined by the researcher. Our research questions this fundamental assumption, with potential implications for these phonotactic learning models if the assumption is unsubstantiated. This assumption has already been challenged by a baseline from Albright (2009), which compared bigram models over distinctive features and segments. The segmental bigram model yielded slightly higher agreement with human wordlikeness judgments than the featural bigram model, although the featural bigram model was closer to human judgments for words containing unattested sequences. However, these results may change for models capable of learning generalizations across longer units of structure; this possibility warrants another test. 2See Daland et al. (2011) for a comprehensive review. 1597 Previous attempts to explicitly quantify the relevance and “psychological accuracy” of a universal, innate set of distinctive features for phonotactic learning have also produced mixed results. Mielke (2008) used a typological analysis to argue for language-specific, learned distinctive features; in 2012, Mielke devised another phonetic similarity metric that corresponds to surface phonological patterns roughly as well as distinctive features do. Drawing upon this work, Dunbar et al. (2015) compared how well featural representations derived from acoustic, articulatory, and phonotactic models capture phonemic distinctions in English. The phonotactic-derived feature representations performed markedly worse than the acoustic or articulatory representations at separating this phonemic space, suggesting a weaker-thanexpected link between phonotactics and acoustic/articulatory phonetics—and indeed, between phonotactics and the features required to distinguish phonemic space. Our work probes this link in the opposite direction, questioning the extent to which distinctive features are necessary to learn phonotactic generalizations. 3 Model3 Our models are recurrent neural networks with LSTM nodes. Each network’s task is to incrementally predict the next phonetic segment in a sequence, given the beginning of the sequence as input. Models were constructed using PyTorch 0.3.1 (Paszke et al., 2017). The function and description of each layer in the model is as follows: 3.1 Input Layer The input layer reads in each phonetic segment is a one-hot vector. 
The number of nodes in this layer is equal to the size of the phonetic inventory—i.e., the number of unique phones in the corpus (with vowels of different stress levels counted as separate phones). For the present data, this number is equal to 77, including start and end symbols that delimited each word in the corpus. 3.2 Embedding Layer The embedding layer projects each phonetic segment in the input into a continuous representation 3All source code for models, training/validation/test sets, result files, and analysis scripts are included as supplementary material and freely available on GitHub. that is passed along to the recurrent layers. The embedding layer has 68 nodes: twice the number of phonological features in the feature representation that we chose (described in more detail in Section 4.1). Since the input layer uses a onehot representation, this means that every phonetic segment in the inventory is represented as a vector of 68 weights between the corresponding input node and the embedding layer—i.e., an embedding. These weights were initialized according to the procedure described in Section 4.1. The activation function for nodes in this layer was linear, with a bias term of 0. 3.3 Recurrent Layers Each of the two recurrent layers of the network consisted of 512 LSTM nodes. The number of recurrent layers, as well as the number of nodes in each layer, were determined through extensive hyperparameter tuning (see Table A1 for details). Each LSTM node receives input not only from the embedding layer, but also from its previous state. This allows the network to maintain a history, keeping track of the phones in the word up to the current point. Compared to simple recurrent neural networks, LSTMs have proven better at learning longer-distance dependencies, allowing them to represent more complex dependencies across non-adjacent timesteps (Hochreiter and Schmidhuber, 1997). 3.4 Output Layer The output layer is a linear decoder layer as large as the segment inventory: 77 nodes. As in the input layer, each node corresponds to a particular phonetic segment. The output of the entire model, then, corresponds to a probability distribution over the next segment. This distribution is normalized using a softmax function, and the cross-entropy between this normalized distribution and the onehot vector of the actual next segment indexes the accuracy of the model’s prediction. 4 Experiment 1: Evaluating on a Held-Out Test Set To investigate whether pre-specified distinctive features are helpful for acquiring phonotactic generalizations, we created two versions of a phonotactic learner: one that initially represents incoming phonetic segments as distinctive feature bundles (a feature-aware condition), and one that ini1598 tially represents phonetic segments as random vectors (a feature-naive condition). Our experimental manipulation occurs in the initialization of the weights between the input layer and the embedding layer; all other parameters were held constant. To compare these, we trained them on a identical subsets of the CELEX2 corpus (Baayen et al., 1995), and evaluated the likelihood that each model assigned to a non-overlapping test subset from the same corpus. 4.1 Method Training Procedure All models had the structure described in Section 3. Before training, the value of the weights between the input and embedding layers was determined in one of two ways, depending on the experimental condition to which the network was assigned: 1. 
Feature-aware condition: The weight vector of each phonetic segment was determined according to its distinctive feature specification, according to the scheme described later in this section (see “Distinctive Features”). Each weight was initialized as either -1, 0, or 1, depending on the phonetic segment’s value for the feature in question. 2. Feature-naive condition: The weight vector of each phonetic segment was populated randomly from a distribution over the values -1, 0, and 1, with proportions identical to those found in the feature-aware condition. All other weights were initialized from a uniform distribution between ±h−1, where h was the number of nodes in the subsequent layer. All weights in the network were adjusted via backpropagation during the course of training. These included the weights between each layer, as well as the weights between successive states of the recurrent layers and those controlling each gate of each LSTM node. The error function used for this was cross-entropy loss, calculated over the 77 phonetic segment classes. Minimizing this crossentropy loss is equivalent to maximizing log likelihood. Each word in the training corpus was treated as a minibatch, with stored error backpropagated through the network once per word using stochastic gradient descent. Activations in each layer were automatically reset after each backpropagation to random values that were generated at the beginning of training. Through hyperparameter tuning (detailed in Table A1), we settled on 1.0 as a suitable value for the initial learning rate, and annealed this by a factor of 0.25 every time there was no improvement on the validation set. The aforementioned hyperparameter tuning also led us to employ a dropout of 0.2, adjusting only 80% of the training weights per minibatch. Each model was trained for a total of 25 epochs (complete runs through the training corpus), after which the iteration of the model that assigned the highest log likelihood to the validation corpus was evaluated on the held-out test corpus, and the phonetic segment embeddings were stored for further analysis (see Section 6). Twenty-five random initializations were trained in both the feature-aware and feature-naive conditions, for a total of 50 initializations. Within a condition, each initialization varied with respect to the initial weights except those between the input layer and the embedding layer. Data Corpus We used a randomly selected 50,000lemma subset from the English part of the phonetically-transcribed CELEX2 database (Baayen et al., 1995) to train and test our model4. 30,000 of these lemma words were used to train the model, and the remaining 20,000 were randomly divided into validation and test sets of 10,000 lemmas each. Lemmas were used instead of inflected forms in order to minimize the number of shared stems across the three sets. The only preprocessing steps applied to these data were the translation of each lemma from the DISC notation used in CELEX2 into IPA (with diphthongs split into separate phonetic segments, in order to increase comparability with Futrell et al., 2017) and the addition of start and end symbols around each word. No syllabification was added, because the models should infer the shape of syllables from the data alone, due to their ability to represent information across multiple timesteps. Distinctive Features The precise distinctive feature structure we used to initialize the phonetic segment embeddings was based on Futrell et al. 
(2017)’s hierarchical feature dependency graphs, 4The CELEX2 corpus was also the basis of Futrell et al. (2017)’s data set. 1599 in order to compare our model to this prior work. In these graphs, each node represents a feature, and certain features are only defined if their ancestor nodes have a certain value. For example, the “height” node is only defined if the manner of segment at hand is “vowel”; this is because the manner node is an ancestor of the height node.5 The first modification that we made to these feature dependency graphs is representing each multivalent feature as a binary one. This is because the values of several features do not lie along a straightforward unidimensional continuum. For instance, the “manner” node specifies the manner of a syllable, and has “trill” and “fricative” as two of its values. These manner classes are equivalent in terms of the size of the articulatory aperture: their ordering along a unidimensional continuum would be totally arbitrary. Instead, we split each possible value of a multivalent feature into a set of binary features, of which only one can be positive (1) at a given time; the rest must be negative (-1), if the feature is defined for the segment at hand. In translating these dependency graphs into vectors, we represent each feature as a pair of dimensions in each phonetic segment vector. The first dimension in each pair expresses the value of the node: positive (+1), negative (-1), or unset (0). The second dimension in each pair denotes whether the node is set (1) or unset (-1), allowing for privative feature representation. This auxiliary dimension may seem redundant, but we include it because it is not the case that unset feature values are truly ‘intermediate’ between positive and negative ones, as a representation without the auxiliary dimension would suggest. We also add another two pairs of dimensions to represent start and end symbols. Dependent Variable We used log likelihood on the held-out test corpus of 10,000 lemmas to evaluate the quality of our models’ phonotactic generalizations. The more accurate a model’s representation of English phonotactics is, the higher the likelihood it should assign to extant English words that it has not seen. 4.2 Results Performance of each model is plotted in Fig. 1. Using a Wilcoxon rank sum test with a continu5These features are detailed further in Graff (2012), though some have been omitted since they are not distinctive in English. -22 -20 -18 Feature-aware Feature-naive Condition Average natural log likelihood over all words in set Word set: test (10000 words) Average log likelihoods per model Figure 1: Box-plot of log likelihoods per model in each experimental condition. Each observation used to generate this plot (N = 50) is the average log likelihood assigned to each word in the test set, for a single model. ity correction, we find that models in the featurenaive condition assigns a significantly higher log likelihood to the test corpus than those in the feature-aware condition (W = 2.43 × 1010; p < .001). On average, the feature-aware models assigned a log likelihood of −20.98 to the words in the test set, and the feature-naive models assigned an average log likelihood of −20.07. In other words, the feature-naive models assigned over twice the probability mass to the test set compared to the feature-aware models, in terms of raw (non-log) probability. 
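The "over twice" figure follows directly from the two average (natural) log likelihoods reported above; a quick check:

```python
import math

avg_ll_feature_aware = -20.98   # average natural log likelihood per test word
avg_ll_feature_naive = -20.07

# Ratio of raw (non-log) probabilities assigned to an average test word
ratio = math.exp(avg_ll_feature_naive - avg_ll_feature_aware)
print(round(ratio, 2))  # ~2.48, i.e. over twice the probability mass
```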
The poorer performance of the feature-aware condition suggests that distinctive features need not be specified a priori of training, and that in fact they may bias the model toward suboptimal solutions. 5 Experiment 2: Comparison to Human Judgments In an effort to validate our models externally against evaluations that humans make, we ran another experiment correlating our models’ loglikelihood ratings of non-words to human wordlikeness judgments of the same non-words. 5.1 Method Stimuli Non-words were designed by Daland et al. (2011) to vary in the level of sonority sequencing principle violation, and as such their form was quite constrained: 96 stress-initial CCVCVC non-words, each starting with a consonant cluster that was either unattested (18 clusters), marginally attested (12 clusters), or frequently attested (18 clusters) as an onset in English. No non-word had more than one lexical neighbor, and non-words whose first 1600 or last 4 segments formed a existing word were excluded.6 Procedure All human data for this experiment was collected by Daland et al. (2011). Forty-eight participants were recruited through Amazon Mechanical Turk; results were only retained from those reporting high (N = 2) or native (N = 36) English proficiency. Each participant performed a Likert wordlikeness rating task (1–6, where 6 was more wordlike) on all 96 stimuli, followed by a head-tohead comparison rating task in which participants were given two words and instructed to choose the non-word that seemed more like a typical English word. Each of the 4560 possible pairs was assigned to a single participant, and no participant saw any non-word more than twice during this task. Daland et al. (2011) found that the comparison average of each non-word (proportion of comparison trials in which it was selected as better than its competitor) correlated with its average Likert rating across participants. However, the comparison average was more sensitive in differentiating non-words at the bottom of the Likert scale; therefore, we used the comparison average to evaluate our models. Our models were the same feature-aware and feature-naive models from Experiment 1, trained on the same data. After training, we calculated the log-likelihood of each of the 96 non-word stimuli from Daland et al. (2011) for each of the 50 models from Experiment 1, and correlated these log-likelihoods to the human-derived comparison averages via the Spearman method. 5.2 Results The correlations between the models’ loglikelihood ratings and the human-derived comparison averages were moderate-to-strong, with Spearman’s ρ ranging from 0.50 to 0.79, which is in the range of the best-performing models from Daland et al. (2011) that were trained on a comparable, but smaller, amount of unsyllabified data (20,000 vs 30,000 words). However, a Wilcoxon rank sum test on ρ yielded no significant difference between feature-naive and feature-aware models in this regard (W = 282; p = 0.56). This indicates that, although both feature-aware 6A full list of these words, as well as their wordlikeness scores, is downloadable from the first author’s website. and feature-naive models can predict human judgments of non-words, the log-likelihoods assigned to this particular set of non-words do not distinguish the feature-aware from the feature-naive models. 
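The per-model correlations reported in this section can be computed with an off-the-shelf rank correlation; a minimal sketch, assuming one model's log-likelihoods and the human comparison averages are stored in parallel lists over the 96 non-words (the example values are hypothetical):

```python
from scipy.stats import spearmanr

def wordlikeness_correlation(model_logliks, comparison_averages):
    """Spearman's rho between a model's log-likelihoods for the non-words and
    the human-derived comparison averages from Daland et al. (2011)."""
    rho, p_value = spearmanr(model_logliks, comparison_averages)
    return rho, p_value

# Hypothetical values for three non-words:
rho, p = wordlikeness_correlation([-14.2, -18.9, -25.1], [0.81, 0.46, 0.12])
```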
6 Clustering of Learned Phone Embeddings To examine the representations that are most helpful for characterizing word-level phonotactic generalizations, we performed a qualitative cluster analysis of the phonetic segment embeddings learned by the randomly-initialized model within each condition that assigned the highest average log likelihood to the test corpus. First, we used agglomerative nesting to cluster the learned phonetic segment embeddings, which were grouped according to the Euclidean distance between them7. Position of each group was calculated in the 68-dimensional space via the unweighted pair-group average method (Sokal and Michener, 1958). The results are depicted in Fig. 2 for the feature-aware model and Fig. 3 for the feature-naive model. Comparing them, we see that the feature-aware model maintains manner-based distinctions even at late stages of the clustering. In contrast, these distinctions as not as clearly depicted in the feature-naive model, but it appears this model still encodes some phonetic information; namely: all stops are incorporated into the structure early, most vowels are incorporated into the structure after non-vowels, and several clusters contain only vowels of the same quality, collapsing over stress. As clustering based on Euclidean distances is only a simplification over the non-linear transformations the network performs, this is a lower bound on the amount of structure the network can find. The feature-naive models’ better performance on the test set suggests that these models may be encoding phonotactic-relevant knowledge in a more distributed representation that cannot be visualized thus—for example, a representation across several layers. The phonetic information encoded by the models may be reflected in the heat map of feature embeddings, plotted in Figs. 4 and 5. To generate these, agglomerative clustering was performed in two dimensions: both on the phonetic seg7Clustering along Manhattan distance, as recommended by Aggarwal et al. (2001), yielded similar results. 1601 ʊ ɪ ɹ ʃ ɑ ʌ ɑː ɒː ɜː ɪ1 ɑ1 ʌ1 ʊ1 ɑ1ː ɒ1ː ɜ1ː ɪ2 ɑ2 ʌ2 ʊ2 ɑ2ː ɒ2ː ɜ2ː aa1 a2 æ æ1 æ1ː æ2 b d ð d.ʒ e ə ɛ e1 ɛ1 e2 ɛ2 f g h iː i1ː i2ː j k l m n ŋ o ɔ ɔː o1 ɔ1 ɔ1ː o2 ɔ2 ɔ2ː p s </s> <s> t t.ʃ uː u1ː u2ː v w x z ʒ θ Manner class a a a a a approximant nasal obs start/end vowel Figure 2: Dendrogram created using agglomerative clustering on trained embeddings from the featureaware model that achieved the highest log likelihood on the test corpus. <s> and </s> signify start- and end-of-word symbols, respectively, and numbers after vowels indicate primary (1) and secondary (2) stress. <s> nʊɪltm səpkbdfgvɹzʃŋd.ʒ u1ː i1ː ɪ1 ɪ2 wuː a1 at.ʃ ɛ1 ɛ2 jæ æ2 æ1 ɑɑ1 ɑ2 ɛi2ː ɑ1ː ɔ1ː xʌ1 ʌ2 ʌɑː iːɒː e</s> ɒ1ː θðe1 ʒɑ2ː ɔ2ː ɒ2ː æ1ː hʊ1 ʊ2 o1 oo2 ɔ2 ɜ2ː ɔː ɜː ɜ1ː u2ː a2 e2 ɔɔ1 Manner class a a a a a approximant nasal obs start/end vowel Figure 3: Dendrogram created using agglomerative clustering on trained embeddings from the featurenaive model that achieved the highest log likelihood on the test corpus. 
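The clustering behind Figs. 2 and 3 can be reproduced with standard hierarchical-clustering tools; a sketch, assuming the trained input-to-embedding weight matrix (one 68-dimensional row per phone) and the corresponding phone labels are available. Here `method="average"` is the unweighted pair-group average (UPGMA) linkage over Euclidean distances.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

# Placeholders for the trained embeddings (n_phones x 68) and their labels.
embeddings = np.random.randn(77, 68)
phones = [f"seg{i}" for i in range(77)]

# Agglomerative nesting over Euclidean distances, UPGMA group positions.
Z = linkage(embeddings, method="average", metric="euclidean")
dendrogram(Z, labels=phones, leaf_font_size=6)
plt.tight_layout()
plt.show()
```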
1602 </s> <s>ŋpskəɪlnʊt mbfɹdʃ u1ːgvzθ d͡ʒt͡ʃxʒðhjɛ ɛ2 ɛ1 ɪ2 ɪ1 æ1 ɑ1ʌ ʌ2 ʌ1ɑ æuː u2ː i2ːiː i1ːɔː ɔ2ː ɔ1ː æ2 ɑ2 ɜ2ːɜːɒː ɒ2ː ɒ1ː ɑ2ː æ1ːɑː ɑ1ː ʊ2 ʊ1 ɜ1ːa a2 a1e e1 e2 ɔ2ɔ ɔ1o o2 o1w 17 12 20 33 66 3634 710 31 27 930 28 32 26 23 24 22 29 25 462 65 64 59 58 63 57 61 60 56 813 18 11 15 21 19 16 246 550 54 45 38 53 49 55 48 51 47 52 68 67 36 42 35 40 43 41 39 44 37 114 Phonetic Segment Embedding Dimension -2 0 2 Value 0 200 400 Color Key and Histogram Count Figure 4: Heatmap of trained embeddings created from the feature-aware model that achieved the best performance on the test set. Clusterings along top axis are based on trained embeddings of each segment. ments and on the embedding dimensions. Colored patches of activations in these heat maps correspond to clusters of dimensions that all activate in response to certain phonetic segments—that is, clusters of dimensions that define a feature. Especially informative are patches with the same activation value below a cluster of phones: this means that the cluster is based on the feature encoded by those dimensions. For example, the two most well-defined final clusters formed by the featureaware model are supported by multiple features, and Fig. 4 reflects this through wide horizontal bands that span the length of those clusters. Here, the vertical width of each band indexes the number of features that define the cluster. The picture is much less clear for Fig. 5, which represents the embeddings learned in the featurenaive condition. The noisiness of the heat map indicates the clusters are not as distinct from each other: though every cluster is defined by at least one embedding dimension, these dimensions do not correlate in terms of their response to other phones outside the cluster. Instead of creating a straightforward clustering along embedding dimensions, the feature-naive model encodes any information that may be relevant to phonotactic probability in a more distributed representation. a a1 ɪ1 ɪ2 i1ː u1ː d͡ʒŋʃzɹvgfdbkp mtlɪʊnsəwuːt͡ʃ ɛ2 ɛ1j æ æ1 æ2ɑ ɑ2 ɑ1ɛ i2ː ɑ1ː ɔ1ːxʌ ʌ2 ʌ1iːɒː ɑːe </s> ɒ1ː <s>θðʒ e1 ɑ2ː ɔ2ː ɒ2ː æ1ːh ʊ2 ʊ1 ɔ2 o2 o1o ɜ2ːɜːɔː ɜ1ː u2ː a2 e2ɔ ɔ1 49 61 31 46 58 35 532 54 48 16 744 17 625 47 968 15 41 42 63 20 51 22 34 39 10 62 33 21 27 45 443 55 53 29 18 24 64 13 50 12 59 66 40 60 28 13219 37 56 865 30 11 14 67 23 52 57 36 38 26 Phonetic Segment Embedding Dimension -2 -1 0 1 2 Value 0 100 200 300 Color Key and Histogram Count Figure 5: Heatmap of trained embeddings created from the feature-naive model that achieved the best performance on the test set. 7 Discussion Returning to our initial questions, it seems prespecified phonological distinctive features are not required for phonotactic learning. All else being equal, representing phonetic segments as bundles of phonological distinctive features does not appear to aid in forming segment-level phonotactic generalizations, and, for this class of learning model, this specific distinctive feature set may even be detrimental. The fact that the featurenaive condition was able to encode phonotactic patterns indicates that all data required to represent these patterns as probabilities between phonetic segments is present in the sequence of segments itself; the learner need not rely on external information, such as distance between phones in acoustic space. This is not to say that phonetic information is irrelevant to phonotactic learning. From examining the encodings that are learned during this process, we observe that the best models do encode some phonetic data. 
This work is an example of how the initialization of even a single layer of a deep learning model can affect its ultimate performance on a held-out test set, a fact already demonstrated and discussed by, for instance, Sutskever et al. (2013). This effect was not observed in the models’ correlations with human judgments, but this may be due to the limited number and form of non-words 1603 tested; with more statistical power, this measure may gain enough precision to distinguish the two conditions8. Finally, most of our models do assign a higher log likelihood to the test corpus than Futrell et al. (2017), which achieved a log likelihood of −21.73, suggesting that neural networks may be just as good at capturing phonotactic regularities as models that generate upcoming phonetic segments via stochastic memoization. However, our initial training set was much larger; when trained on only the 2,500 lemmas in Futrell et al.’s training set, our models yielded slightly lower log likelihoods than theirs (though we could not compare directly because their test set was inaccessible). 8 Implications Per our results, phonological distinctive features do not appear to be mandatory for phonotactic acquisition. At the segment level, phonotactic patterns are learnable from distributional characteristics of each segment alone. This signals a need for revision of segmental phonotactic learning models that rely on a set of predetermined distinctive features—or at least stronger justification for the inclusion of any proposed distinctive feature set over another. There are still a few additional tests that must be done before our conclusions can be generalized beyond these experiments. First, although the feature set that we used is typical of those used by other models of phonotactics, it is still possible that some other phonological feature set would result in better performance. Second, distinctive features may yet be helpful for models that train on much smaller datasets than ours, since they can provide hints to phonological structure that are not inferrable from such limited data. Beyond distinctive features, some other, more detailed phonetic representation may yet prove helpful for phonotactic acquisition, if phonotactic expectations actually contain more detail about token-level variability, instead of the discrete segment-level representation assumed herein. Precise consequences for extant phonotactic learning models will depend whether this is the case; the determination is complicated by the fact that humans acquire both phonetic categories and phono8This homogeneity may not have been an issue for Daland et al. (2011)’s comparison because the models tested therein had very diverse structures, and may have become sensitive to very different aspects of the training data as a result. tactic patterns simultaneously (Jusczyk et al., 1994; Werker and Tees, 1984). One interesting avenue for future research is the multi-language case—i.e., training the model on a corpus in one language, and analyzing its performance on a corpus in a different language. This can help us make predictions about the types of pronunciation difficulties that speakers are likely to encounter in a second language, illuminating phonological effects of cross-linguistic transfer. We must nonetheless be wary of using these results to make claims about human language acquisition. 
Human language is shaped by many other factors that are extraneous to our models, including articulatory restrictions, perceptual limitations, and constraints of cognitive economy. At the risk of overreaching, we must better specify these factors and their consequences before drawing further analogies. 9 Conclusion Phonotactic acquisition can be accomplished without external, prior knowledge of distinctive features; indeed, according to our results, this knowledge may be a slight hindrance rather than a help. Though segment-level phonotactic inference may still benefit from access to a finer-grained phonetic specification of the speech stream, a predetermined encoding of this input in terms of distinctive features does not appear to be required for this purpose. Acknowledgements We thank Matthew Goldrick, members of the Northwestern Language & Computation Lab and Northwestern SoundLab, as well as the audience at MidPhon 2018 and anonymous reviewers for their helpful feedback. We also wish to thank the developers of the PyTorch Word-Level language modeling RNN example, which served as a starting point for our RNN code. This research was supported in part by NSF GRFP Award DGE1842165 (Mirea) and NSF 1734217 (Bicknell). References Charu C. Aggarwal, Alexander Hinneburg, and Daniel A. Keim. 2001. On the Surprising Behavior of Distance Metrics in High Dimensional Space. In Gerhard Goos, Juris Hartmanis, Jan van Leeuwen, Jan Van den Bussche, and Victor Vianu, editors, Database Theory — ICDT 2001, volume 1604 1973, pages 420–434. Springer Berlin Heidelberg, Berlin, Heidelberg. Adam Albright. 2009. Feature-based generalisation as a source of gradient acceptability. Phonology, 26(1):9–41. John R. Anderson and Robert Milson. 1989. Human memory: An adaptive perspective. Psychological Review, 96(4):703–719. R H Baayen, R Piepenbrock, and L Gulikers. 1995. CELEX2. Am´elie Bernard. 2017. Novel phonotactic learning: Tracking syllable-position and co-occurrence constraints. Journal of Memory and Language, 96:138– 154. Noam Chomsky and Morris Halle. 1965. Some controversial questions in phonological theory. Journal of Linguistics, 1(2):97–138. G. Nick Clements. 2009. The Role of Features in Phonological Inventories. In Eric Raimy and Charles E. Cairns, editors, Contemporary Views on Architecture and Representations in Phonology, pages 19–68. MIT Press. Jeffry A. Coady and Richard N. Aslin. 2004. Young children’s sensitivity to probabilistic phonotactics in the developing lexicon. Journal of Experimental Child Psychology, 89(3):183–213. Robert Daland, Bruce Hayes, James White, Marc Garellek, Andrea Davis, and Ingrid Norrmann. 2011. Explaining sonority projection effects. Phonology, 28:197–234. B. Elan Dresher. 2015. The arch not the stones: Universal feature theory without universal features. Nordlyd, 41(2):165–181. Ewan Dunbar, Gabriel Synnaeve, and Emmanuel Dupoux. 2015. Quantitative Methods for Comparing Featural Representations. In Proceedings of the International Congress of Phonetic Sciences. Stefan A. Frisch, Nathan R. Large, and David B. Pisoni. 2000. Perception of Wordlikeness: Effects of Segment Probability and Length on the Processing of Nonwords. Journal of memory and language, 42(4):481–496. Richard Futrell, Adam Albright, Peter Graff, and Timothy J. O’Donnell. 2017. A Generative Model of Phonotactics. Transactions of the Association for Computational Linguistics, 5(0):73–86. Matthew Goldrick. 2004. Phonological features and phonotactic constraints in speech production. 
Journal of Memory and Language, 51(4):586–603. Peter Nepomuk Herwig Maria Graff. 2012. Communicative Efficiency in the Lexicon. Thesis, Massachusetts Institute of Technology. Bruce Hayes and Colin Wilson. 2008. A Maximum Entropy Model of Phonotactics and Phonotactic Learning. Linguistic Inquiry, 39(3):379–440. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9(8):1735–1780. Peter W. Jusczyk, Paul A. Luce, and Jan Charles-Luce. 1994. Infants′ Sensitivity to Phonotactic Patterns in the Native Language. Journal of Memory and Language, 33(5):630–645. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016. Character-Aware Neural Language Models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI16), pages 2741–2749. Association for the Advancement of Artificial Intelligence. Jeff Mielke. 2008. The Emergence of Distinctive Features. Oxford Studies in Typology and Linguistic Theory. Oxford University Press, Oxford, New York. Jeff Mielke. 2012. A phonetically based metric of sound similarity. Lingua, 122(2):145–163. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. Robert R. Sokal and Charles D. Michener. 1958. A statistical method for evaluating systematic relationships. University of Kansas Science Bulletin, 38(1):1409–1438. Holly L Storkel, J Armbr¨uster, and Hogan, T P. 2006. Differentiating phonotactic probability and neighborhood density in adult word learning. Journal of Speech, Language & Hearing Research, 49(6):1175–1192. Martin Sundermeyer, Hermann Ney, and Ralf Schl¨uter. 2015. From Feedforward to Recurrent LSTM Neural Networks for Language Modeling. IEEE/ACM Trans. Audio, Speech and Lang. Proc., 23(3):517– 529. Ilya Sutskever, James Martens, and George Dahl. 2013. On the importance of initialization and momentum in deep learning. In Proceedings of Machine Learning Research, volume 28, page 9, Atlanta, Georgia, USA. Michael S. Vitevitch, Paul A. Luce, Jan Charles-Luce, and David Kemmerer. 1997. Phonotactics and Syllable Stress: Implications for the Processing of Spoken Nonsense Words. Language and Speech, 40(1):47–62. Janet F. Werker and Richard C. Tees. 1984. Crosslanguage speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development, 7(1):49–63. 1605 A Appendix Hyperparameter Name Description Values Tested rand reset whether activations in the model reset to a random state (True), or to zero (False) after each word True, False lr initial learning rate 0.1, 0.5, 1.0, 2.0 anneal factor amount by which to anneal learning rate, if no improvement found 0, 0.25, 0.5, 1.0 patience number of training epochs to wait for validation loss to improve before updating weights 0, 2, 4 dropout proportion of weights to keep fixed 0, 0.2, 0.5 epochs number of epochs (complete passes through the data) to train for 25, 50, 100 nlayers number of recurrent layers 1, 2, 4 nhid number of nodes in each recurrent layer 128, 256, 512, 1250 Table A1: Particulars of hyperparameter testing. Hyperparameters were optimized for speed and likelihood assigned to the validation set. Optimal parameters for the validation set are bolded, and were used in the experiments reported here.
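For reference, the search space in Table A1 can be written down as a simple grid; the values actually used in the reported experiments (learning rate 1.0, anneal factor 0.25, dropout 0.2, 25 epochs, 2 recurrent layers of 512 nodes, per the text above) were those that maximized validation-set likelihood. The enumeration below is only a sketch of that grid, not the tuning code itself.

```python
from itertools import product

# Hyperparameter search space from Table A1.
grid = {
    "rand_reset":    [True, False],
    "lr":            [0.1, 0.5, 1.0, 2.0],
    "anneal_factor": [0, 0.25, 0.5, 1.0],
    "patience":      [0, 2, 4],
    "dropout":       [0, 0.2, 0.5],
    "epochs":        [25, 50, 100],
    "nlayers":       [1, 2, 4],
    "nhid":          [128, 256, 512, 1250],
}

# One configuration per point in the grid (enumerated here, not trained).
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
```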
2019
155
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1606–1613 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1606 Better Character Language Modeling Through Morphology Terra Blevins1 and Luke Zettlemoyer1,2 1Paul G. Allen School of Computer Science & Engineering, University of Washington 2Facebook AI Research {blvns, lsz}@cs.washington.edu Abstract We incorporate morphological supervision into character language models (CLMs) via multitasking and show that this addition improves bits-per-character (BPC) performance across 24 languages, even when the morphology data and language modeling data are disjoint. Analyzing the CLMs shows that inflected words benefit more from explicitly modeling morphology than uninflected words, and that morphological supervision improves performance even as the amount of language modeling data grows. We then transfer morphological supervision across languages to improve language modeling performance in the low-resource setting. 1 Introduction Character language models (CLMs) are distributions over sequences of characters (Sutskever et al., 2011), in contrast to traditional language models which are distributions over sequences of words. CLMs eliminate the need for a fixed word vocabulary, and modeling text at the character level gives the CLM access to subword information. These attributes suggest that CLMs can model regularities that exist within words, such as morphological inflection. However, even large language modeling (LM) corpora have sparse coverage of inflected forms for morphologically-rich languages, which has been shown to make word and character language modeling more difficult (Gerz et al., 2018b; Cotterell et al., 2018). Because to this, we hypothesize that accurately modeling morphology improves language modeling, but that it is difficult for CLMs to learn this from text alone. Motivated by this hypothesis, we add morphology supervision to character language modeling and show that, across two benchmark datasets, multitasking morphology with CLMs improves bits-per-character (BPC) performance on twentyfour languages, even when the annotated morphology features and language modeling data do not overlap. We also show that models augmented by multitasking achieve better BPC improvements on inflected forms than on uninflected forms, and that increasing the amount of language modeling data does not diminish the gains from morphology. Furthermore, to augment morphology annotations in low-resource languages, we also transfer morphology information between pairs of highand low-resource languages. In this cross-lingual setting, we see that morphology supervision from the high-resource language improves BPC performance on the low-resource language over both the low-resource multitask model and over adding language modeling data from the high-resource language alone. 2 Approach Language Modeling Given a sequence of characters c = c1, c2, ..., cn, our character-level language models calculate the probability of c as p(c) = |c| Y i=1 p(ci|c1, c2, ..., ci−1) (1) Each distribution is an LSTM (Hochreiter and Schmidhuber, 1997) trained such that at each time step t, the model takes in a character ct and estimates the probability of the next character ct+1 as p(ct+1|c≤t) = g(LSTM(wt, ht−1)) (2) where ht−1 is the previous hidden state of the LSTM, wt is the character embedding learned by the model for ct, and g is a softmax over the character vocabulary space. 
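A minimal PyTorch sketch of the character-level LSTM language model in Eq. (2); the embedding and hidden sizes follow the model architecture given in Section 3, and the softmax over the character vocabulary is left to the loss function:

```python
import torch
import torch.nn as nn

class CharLM(nn.Module):
    """p(c_{t+1} | c_{<=t}) = softmax(W . LSTM(embed(c_t), h_{t-1}))"""

    def __init__(self, vocab_size, emb_dim=512, hidden_dim=1024, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, chars, hidden=None):
        # chars: (batch, seq_len) integer character ids
        states, hidden = self.lstm(self.embed(chars), hidden)
        logits = self.out(states)   # unnormalized scores over the next character
        return logits, hidden
```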
We calculate the loss function of our language model LLM as the negative log-likelihood of the 1607 CS DE EN ES FI FR RU dev test dev test dev test dev test dev test dev test dev test HCLM 2.010 1.984 1.605 1.588 1.591 1.538 1.548 1.498 1.754 1.711 1.499 1.467 1.777 1.761 LM 2.013 1.972 1.557 1.538 1.543 1.488 1.571 1.505 1.725 1.699 1.357 1.305 1.745 1.724 MTL 1.938 1.900 1.249 1.241 1.313 1.256 1.260 1.196 1.698 1.669 1.211 1.167 1.645 1.619 ∆ 0.075 0.072 0.308 0.297 0.230 0.232 0.311 0.309 0.027 0.030 0.146 0.138 0.100 0.105 Table 1: Results on Multilingual Wikipedia Corpus (MWC) in bits per character (BPC). ∆calculated improvement in BPC from the baseline LM to MTL. HCLM is the best model from Kawakami et al. (2017). model on the character sequence c: LLM(c) = NLL(c) = − |c| X i=1 log p(ci|c<i) (3) We then evaluate the trained model’s performance with bits-per-character (BPC): BPC(c) = −1 |c| |c| X i=1 log p(ci|c<i) (4) Multitask Learning To add morphology features as supervision, we use a multitask learning (MTL) objective (Collobert and Weston, 2008) that combines loss functions for predicting different morphological tags with the language modeling objective. Since morphological features are annotated at the word-level, we convert these annotations to the character level by placing each annotated word’s tags as supervision on the first character (which we found to outperform supervising the last character in preliminary results). This early placement allows the model to have access to the morphological features while decoding the rest of the characters in the word. Therefore, our morphology data m = m1, m2, ..., mn is a sequence of labeled pairs in the form mi = (x, y) where x is a character and y is a set of morphology tags for that character. For example, “cats ran” would be given to our model as the sequence (‘c’, Number=Pl), (‘a’, -), (‘t’, -), (‘s’, -), (‘ ’, -), (‘r’, Tense=Past), (‘a’, -), (‘n’, -). We modify the model’s loss function to L(c, m) = LLM(c) + δ n X i=1 Li(m) (5) where n is the number of morphological features we have annotated in a language, δ is a weighting parameter between the primary and auxiliary losses, LLM is the original language modeling loss, and Li are the additional losses for each morphological feature (e.g., tense, number, etc). Because we include a separate loss for each morphological feature, each feature is predicted independently. 3 Experimental Setup Datasets We obtain morphological annotations for 24 languages (Table 2) from Universal Dependencies (UD; v.2.3), which consists of dependency parsing treebanks with morphology annotations on a large number of languages (Nivre et al., 2018). These languages were chosen based on the size of their treebanks (to ensure a sufficient amount of morphology annotations); we also exclude languages that do not have morphology features annotated in the treebank. For language modeling supervision, we train two sets of models. One set is trained with the text from the UD treebanks; the other set of models is trained on the Multilingual Wikipedia Corpus (MWC) (Kawakami et al., 2017). This language modeling dataset consists of Wikipedia data across seven languages (Czech, German, English, Spanish, Finnish, French, and Russian). Model architecture Our models each consist of a stacked LSTM with 1024 hidden dimensions and a character embedding layer of 512 dimensions. We include two hidden layers in the language models trained on UD, and three hidden layers in those trained on MWC. 
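A sketch of how the multitask objective in Eq. (5) can be attached to the character LM sketched above: one linear classifier per morphological feature reads the hidden states of a chosen layer, and its cross-entropy loss is added to the LM loss with weight δ. Annotations sit on the first character of each word, so all other positions are masked out of the auxiliary losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultitaskCharLM(nn.Module):
    """Character LM with one auxiliary classifier per morphological feature."""

    def __init__(self, vocab_size, feature_sizes, emb_dim=512,
                 hidden_dim=1024, num_layers=2, mtl_layer=2, delta=1.0):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Stack single-layer LSTMs so the morphology heads can read any layer.
        self.layers = nn.ModuleList(
            [nn.LSTM(emb_dim if i == 0 else hidden_dim, hidden_dim, batch_first=True)
             for i in range(num_layers)])
        self.lm_head = nn.Linear(hidden_dim, vocab_size)
        self.morph_heads = nn.ModuleList(
            [nn.Linear(hidden_dim, n_tags) for n_tags in feature_sizes])
        self.mtl_layer = mtl_layer   # hidden layer feeding the morphology heads
        self.delta = delta           # weight on the auxiliary morphology losses

    def forward(self, chars):
        x = self.embed(chars)
        layer_states = []
        for lstm in self.layers:
            x, _ = lstm(x)
            layer_states.append(x)
        lm_logits = self.lm_head(layer_states[-1])
        morph_states = layer_states[self.mtl_layer - 1]
        morph_logits = [head(morph_states) for head in self.morph_heads]
        return lm_logits, morph_logits

    def loss(self, chars, next_chars, morph_tags):
        """morph_tags: one (batch, seq_len) tag tensor per feature, with -100
        marking characters that carry no annotation."""
        lm_logits, morph_logits = self.forward(chars)
        total = F.cross_entropy(lm_logits.transpose(1, 2), next_chars)
        for logits, tags in zip(morph_logits, morph_tags):
            total = total + self.delta * F.cross_entropy(
                logits.transpose(1, 2), tags, ignore_index=-100)
        return total
```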
The parameters that integrate multitasking into the model (the layer at which we multitask morphology and the weighting we give the morphology losses, δ) are tuned individually for each language. Further hyperparameter and training details are given in the supplement. 4 Language Modeling Results Distant MTL We first train CLMs where the language modeling data (from MWC) and morphology data (from UD) do not overlap (Table 1).1 1Since both of these datasets draw from Wikipedia, we verified that no sentences overlap between the MWC test set 1608 (a) (b) (c) Figure 1: Improvement of MTL over the LM baseline (a) over the inflection rate of each UD language, (b) over the quantity of training data for each UD language, and (c) for inflected and uninflected words in the UD dev set. Lang ISO %Infl LM MTL ∆ Bulgarian BG 39% 1.890 1.887 0.003 Catalan CA 31% 1.653 1.599 0.054 Czech CS 43% 2.045 1.832 0.213 Danish DA 30% 2.152 2.135 0.017 German DE 33% 1.917 1.881 0.036 English EN 15% 2.183 2.173 0.010 Spanish ES 28% 1.801 1.763 0.038 Farsi FA 27% 2.213 2.205 0.008 French FR 32% 1.751 1.712 0.039 Hindi HI 28% 1.819 1.773 0.046 Croatian HR 49% 1.866 1.841 0.025 Italian IT 36% 1.595 1.554 0.041 Latvian LV 47% 2.243 2.217 0.026 Dutch NL 19% 1.989 1.972 0.017 Polish PL 42% 2.218 2.154 0.064 Portuguese PT 31% 1.787 1.785 0.002 Romanian RO 42% 1.840 1.798 0.042 Russian RU 42% 1.993 1.824 0.169 Slovak SK 45% 2.705 2.686 0.019 Ukranian UK 40% 2.359 2.338 0.021 Estonian ET 49% 2.089 1.993 0.096 Finnish FI 55% 1.981 1.921 0.060 Arabic AR 86% 1.724 1.708 0.016 Hebrew HE 42% 2.293 2.282 0.011 Table 2: BPC results on the Universal Dependencies (UD) test set. %Inflis the inflection rate in each language. Languages are grouped by fusional, agglutinative, and introflexive typologies, respectively. In this setting, we only train on the morphology features from UD and do not include this data as additional language modeling supervision. These models are trained on alternating batches from the two disjoint datasets. LM is a language modeling baseline with no multitask objective; MTL adds morphology supervision. We find that for all seven languages, the MTL model outperforms our baseline trained only on MWC. Our model also outperforms the strongest model from Kawakami et al. (2017), HCLMcache, which is a hierarchical language model and the UD treebanks for each of the seven languages. with caching. Thus, adding morphology supervision to our character language models allows us to achieve lower BPCs than a more complicated LM architecture. Surprisingly, we see a larger gain on languages with more LM data (EN, DE, ES, FR) than those with less data (but are considered to be more morphologically rich, e.g., CS, DE, and RU); we explore this phenomenon more in Section 5. Fully Supervised MTL We then train CLMs using UD for both langauge modeling and morphology supervision on more languages (Table 2). We again find that adding morphology supervision improves BPC. In general, we see smaller improvements between the LM and MTL models than under distant supervision, even though the UD LM data is fully annotated with morphology tags; this is likely due to the smaller training sets in UD (on average) than in MWC. On languages where the size of the two datasets are comparable, such as Russian and Czech, we see larger improvements on the fully supervised models than we do in the distant LM setting. 
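For the distant setting just described, where MWC provides only LM supervision and UD provides only morphology supervision, training alternates batches from the two disjoint corpora. The loop below is a sketch reusing the MultitaskCharLM above; the batch layout is an assumption for illustration.

```python
from itertools import cycle
import torch.nn.functional as F

def train_distant_mtl(model, optimizer, lm_batches, morph_batches, epochs=1):
    """MWC batches contribute only the LM loss; UD batches contribute only the
    delta-weighted morphology losses (no LM supervision from UD)."""
    morph_iter = cycle(morph_batches)
    for _ in range(epochs):
        for chars, next_chars in lm_batches:
            # Language modeling batch (MWC).
            optimizer.zero_grad()
            lm_logits, _ = model(chars)
            F.cross_entropy(lm_logits.transpose(1, 2), next_chars).backward()
            optimizer.step()

            # Morphology batch (UD): only the feature classifiers are supervised.
            m_chars, m_tags = next(morph_iter)
            optimizer.zero_grad()
            _, morph_logits = model(m_chars)
            morph_loss = sum(
                model.delta * F.cross_entropy(
                    logits.transpose(1, 2), tags, ignore_index=-100)
                for logits, tags in zip(morph_logits, m_tags))
            morph_loss.backward()
            optimizer.step()
```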
To investigate these results, we compare the rate of inflected words on the development set (which we use as a rough measure of morphological complexity of the language) in a language against BPC improvement by MTL model (Fig. 1(a)). The rate at which each language is inflected is given in Table 2. We unexpectedly find that how much a language benefits from morphology supervision is only weakly correlated with the inflection rate of the language (r=0.15). This is surprising, because one would expect that additional morphological supervision would help languages that encode more morphological features in the forms (i.e., with higher inflection rates). We then examine the effect of training dataset 1609 (a) % Train # Chars LM MTL ∆ CS 5% 0.31M 2.829 2.793 0.036 10% 0.61M 2.625 2.581 0.044 25% 1.5M 2.379 2.303 0.076 50% 3.1M 2.191 2.120 0.071 100% 6.1M 2.013 1.938 0.075 MWC-LG 10.2M 1.835 1.729 0.106 RU 5% 0.47M 2.492 2.486 0.006 10% 0.93M 2.305 2.283 0.022 25% 2.3M 2.066 2.033 0.033 50% 4.7M 1.935 1.898 0.037 100% 9.3M 1.745 1.645 0.100 MWC-LG 18.2M 1.554 1.377 0.177 (b) % Train # Chars BPC CS 0% (LM) 2.013 5% 0.34M 2.040 10% 0.69M 2.019 25% 1.7M 2.000 50% 3.4M 1.984 100% 6.9M 1.938 RU 0% (LM) 1.745 5% 0.27M 1.758 10% 0.53M 1.761 25% 1.3M 1.673 50% 2.7M 1.700 100% 5.3M 1.645 (c) LM data Morph. data BPC SK None 2.806 SK 2.779 CS 2.752 CS+SK 2.777 CS+SK None 2.668 CS+SK 2.446 UK None 2.369 UK 2.348 RU 2.348 RU+UK 2.351 RU+UK None 2.495 RU+UK 2.316 Table 3: (a) BPC on MWC development set with varied amounts of LM training data from MWC. The last line is from training on MWC-large dataset, (b) BPC on MWC development set with varied amounts of supervised morphology data from UD train set (compared against the baseline LM), and (c) Cross-lingual transfer on UD, evaluated on low-resource language’s development set: from Czech (CS; 6.9M characters in training set) to Slovak (SK; 0.4M) and from Russian (RU; 5.3M) to Ukrainian (UK; 0.5M) size on BPC improvement between the LM and the multitasked model (Fig. 1(b)). We find that more training data (which adds both morphological and LM supervision) is strongly correlated with larger gains over the baseline LM (r=0.93). Therefore, it seems that any potential correlation between morphological complexity and the benefit of multitasking morphology is overwhelmed by differences in dataset size. 5 Analysis Experiments Modeling Inflected Words We hypothesized that morphology supervision would be most beneficial to words whose form is dependent on their morphology, e.g. inflected words. To investigate this, we calculate BPC of our UD models on inflected and uninflected forms in the UD development set. We determine whether or not a word is inflected by comparing it to the (annotated) lemma given in the UD treebank. We find that on 16 of the 24 languages for which we train models on UD, the MTL model improves more on inflected words than uninflected words, and that the average delta between LM and MTL models is 31% greater for inflected words than uninflected. A comparison of the improvements in six of these languages are given in Fig. 1(c). We show results for the agglutinative (ET, FI) and introflexive (AR, HE) languages and pick two fusional languages (EN, RU) against which to compare. Effect of Training Data One caveat to the observed gain from morphology is that the CLMs may capture this information if given more language modeling data, which is much cheaper to obtain than morphology annotations. 
To test this, we train CLMs on Czech (CS) and Russian (RU) on varied amounts of language modeling data from the MWC corpus (Table 2(a)). We find that for both RU and CS, increasing the amount of LM data does not eliminate the gains we see from multitasking with morphology. Instead, we see that increasing LM data leads to larger improvements in the MTL model. Even when we train the CLMs on twice as much LM data (obtained from a larger version of the MWC dataset, MWC-large), we continue to see large improvements via multitasking. We then investigate how the amount of annotated morphology data affects performance on these models (Table 2(b)). We find that, as expected, increasing the amount of morphological data the language model is trained on improves BPC performance. For both Czech and Russian, the MTL models mulitasked with 25% or more of the annotated data still outperform the LM baseline, but MTL models trained on smaller subsets of the morphology data performed worse than the baseline. This is in line with our findings in Section 4 that the amount of annotated morphology data is closely tied with how much multitasking helps. Cross-lingual Transfer In the previous section, we showed that the amount of training data (both for LM and for morphology) the CLM sees is crucial for better performance. Motivated by this, we extend our models to the cross-lingual setting, in which we use data from high-resource languages to improve performance on closely-related, lowresource ones. We train models on the (high, 1610 low) language pairs of (Russian, Ukrainian) and (Czech, Slovak) and transfer both LM and morphological supervision (Table 2(c)). We find the best performance for each low-resource language is achieved by using both the high-resource LM data and morphology annotations to augment the low-resource data. In Slovak (SK), this gives us a 0.333 BPC improvement over the MTL model on SK data alone, and in Ukranian (UK), we see a improvement of 0.032 in this setting over the MTL trained only on UK. 6 Related Work Prior work has investigated to what degree neural models capture morphology when trained on language modeling (Vania and Lopez, 2017) and on machine translation (Belinkov et al., 2017; Bisazza and Tump, 2018). Other work has looked into how the architecture of language models can be improved for morphologically-rich languages (Gerz et al., 2018a). In particular, both Kawakami et al. (2017) and Mielke and Eisner (2019) proposed hybrid open-vocabulary LM architectures to deal with rare words in morphologically-rich languages on MWC.2 Another line of work has investigated the use of morphology to improve models trained on other NLP tasks. These approaches add morphology as an input to the model, either with gold labels on the LM dataset (Vania and Lopez, 2017) or by labeling the data with a pretrained morphological tagger (Botha and Blunsom, 2014; Matthews et al., 2018). This approach to adding morphology as input features to models has also been applied to dependency parsers (Vania et al., 2018) and semantic role labeling models (S¸ahin and Steedman, 2018). Unlike these approaches, however, our technique does not require the morphology data to overlap with the training data of the primary task or depend on automatically labeled features. More similarly to our work, Dalvi et al. 
(2017) find that incorporating morphological supervision into the decoder of an NMT system via multitasking improves performance by up to 0.58 BLEU points over the baseline for English-German, EnglishCzech, and German-English. 2Results comparing against Mielke and Eisner (2019) are given in the supplement, due to a different character vocabulary from Kawakami et al. (2017). 7 Conclusion We incorporate morphological supervision into character language models via multitask learning and find that this addition improves BPC on 24 languages. Furthermore, we observe this gain even when the morphological annotations and language modeling data are disjoint, providing a simple way to improve language modelsing without requiring additional annotation efforts. Our analysis finds that the addition of morphology benefits inflected forms more than uninflected forms and that training our CLMs on additional language modeling data does not diminish these gains in BPC. Finally, we show that these gains can also be projected across closely related languages by sharing morphological annotations. We conclude that this multitasking approach helps the CLMs capture morphology better than the LM objective alone. Acknowledgements This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE1762114. We thank Victor Zhong, Sewon Min, and the anonymous reviewers for their helpful comments. References Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 861– 872. Arianna Bisazza and Clara Tump. 2018. The lazy encoder: A fine-grained analysis of the role of morphology in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2871–2876. Jan Botha and Phil Blunsom. 2014. Compositional morphology for word representations and language modelling. In International Conference on Machine Learning, pages 1899–1907. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pages 160–167. ACM. Ryan Cotterell, Sebastian J. Mielke, Jason Eisner, and Brian Roark. 2018. Are all languages equally hard 1611 to language-model? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 536–541. Association for Computational Linguistics. Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, and Stephan Vogel. 2017. Understanding and improving morphological learning in the neural machine translation decoder. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 142–151. Daniela Gerz, Ivan Vuli´c, Edoardo Ponti, Jason Naradowsky, Roi Reichart, and Anna Korhonen. 2018a. Language modeling for morphologically rich languages: Character-aware modeling for word-level prediction. Transactions of the Association of Computational Linguistics, 6:451–465. Daniela Gerz, Ivan Vuli´c, Edoardo Maria Ponti, Roi Reichart, and Anna Korhonen. 2018b. On the relation between linguistic typology and (limitations of) multilingual language modeling. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 316–327. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Kazuya Kawakami, Chris Dyer, and Phil Blunsom. 2017. Learning to create and reuse words in openvocabulary neural language modeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of 3rd International Conference of Learning Representations. Austin Matthews, Graham Neubig, and Chris Dyer. 2018. Using morphological knowledge in openvocabulary neural language models. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 1435–1445. Sebastian J Mielke and Jason Eisner. 2019. Spell once, summon anywhere: A two-level open-vocabulary language model. In Proceedings of the Thirty-Third AAAI Conference on Artifical Intelligence. Joakim Nivre et al. 2018. Universal dependencies 2.3. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles University. G¨ozde G¨ul S¸ahin and Mark Steedman. 2018. Character-level models versus morphology in semantic role labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Ilya Sutskever, James Martens, and Geoffrey E Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1017–1024. Clara Vania, Andreas Grivas, and Adam Lopez. 2018. What do character-level models learn about morphology? the case of dependency parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2573– 2583. Association for Computational Linguistics. Clara Vania and Adam Lopez. 2017. From character to words to in between: Do we capture morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2016–2027. Association for Computational Linguistics. A Appendix: Languages and Datasets The languages we use from Universal Dependencies and details about their treebanks are given in Table 4. Most of the treebanks we used in this paper are manually annotated (and then possibly automatically converted to their current format), except for German, English, and French, which are automatically annotated. For models trained in the Num Chars Lang ISO Treebank Train Dev Test Bulgarian BG BTB 0.7M 90K 88K Catalan CA AnCora 2.2M 0.3M 0.3M Czech CS PDT 6.9M 0.9M 1.0M Danish DA DDT 0.4M 56K 54K German DE GSD 1.6M 75K 0.1M English EN EWT 1.0M 0.1M 0.1M Spanish ES GSD 2.1M 0.2M 65K Farsi FA Seraji 0.6M 77K 78K French FR GSD 1.9M 0.2M 52K Hindi HI HDTB 1.3M 0.2M 0.2M Croatian HR SET 0.9M 0.1M 0.1M Italian IT ISDT 1.6M 67K 59K Latvian LV LVTB 0.7M 0.1M 0.1M Dutch NL Alpino 1.1M 65K 67K Polish PL LFG 0.6M 74K 74K Portuguese PT Bosque 1.1M 58K 55K Romanian RO RRT 1.1M 98K 93K Russian RU SynTagRus 5.3M 0.7M 0.7M Slovak SK SNK 0.4M 76K 80K Ukranian UK IU 0.5M 71K 98K Estonian ET EDT 2.2M 0.2M 0.3M Finnish FI TDT 1.2M 0.1M 0.2M Arabic AR PADT 1.3M 0.2M 0.2M Hebrew HE HTB 0.8M 63K 68K Table 4: Dataset statistics for Universal Dependencies (UD; v.2.3). 
Languages are grouped by typology, from top to bottom: fusional, agglutinative, and introflexive 1612 Num Chars Lang Vocab Train Dev Test CS 238 6.1M 0.4M 0.5M DE 298 13.6M 1.2M 1.3M EN 307 15.6M 1.5M 1.3M ES 307 11.0M 1.0M 1.3M FI 246 6.4M 0.7M 0.6M FR 272 12.4M 1.3M 1.6M RU 273 9.3M 1.0M 0.9M Table 5: Dataset statistics for Multilingual Wikipedia Corpus (MWC). Vocabulary size is based on the character vocabulary given in (Kawakami et al., 2017). fully-supervised MTL setting where UD is used for both LM and morphology supervision, we calculate the character vocabulary for each language by including any character that occurs more than 5 times in the training set of the language’s UD treebank. Dataset statistics for the Multilingual Wikipedia Corpus (MWC) are given in Table 5. When analyzing the effect of LM training dataset size on Czech and Russian, we also train models on the training portion of a larger version of the MWC corpus, MWC-large, which contains approximately twice as much training data as the standard MWC dataset. Specifically, MWC-large contains 10.2M training characters for Czech and 18.2M for Russian. There is no prior work that we know of that reports BPC on this larger dataset. For models trained on the disjoint supervision setting, we use the character vocabulary provided for each language in the MWC dataset (see Kawakami et al. (2017) for preprocessing details). In cases where we use two sources of supervision for the model – LM supervision from MWC and morphology supervision from UD – we use the MWC character vocabulary for all inputs, so that BPC results across models are comparable. This only affects a small number of the character types (11 or fewer for each language) in the UD training data. The character vocabulary provided in the MWC dataset and used for the distant supervision setting differs from the vocabulary calculated by including the characters that occur more than 25 times in the MWC training set.3 Because of this, our distant supervision setting on MWC is not comparable with Mielke and Eisner (2019), which uses the second vocabulary setting. Therefore, we re3On English, this preprocessing difference decreases the character vocabulary size from 307 in the provided vocabulary to 167. train our character LM baselines and multitasked models in this vocabulary setting (Table 6). We find that our LM and MTL models generally obtain slightly better performance on this setting, and we continue to see improvement from multitasking morphology over the character LM baseline. B Appendix: Model Parameters and Training To train all models presented in this paper, we use the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of 0.002 and clip the norm of the gradient to 5. We also apply a dropout of 0.5 to each layer. We train each model on sequences of 150 characters and use early stopping with a patience of 10. We only use the language modeling performance (BPC) on the development set for early stopping and hyperparameter selection (and do not consider the morphology losses). For the UD language models, we train models with two hidden layers for 150 epochs with a batch size of 10. The models trained on MWC contain three hidden layers and are trained for 250 epochs with a batch size of 32. All of our models are implemented in Pytorch.4 For each language, we individually tuned the level at which we multitask the morphology objectives and the weighting ratio between the primary and auxiliary losses δ. 
We consider multitasking the morphology objective at either the first or second hidden layer (as all of our models have two hidden layers), and tune for each language δ = {0.01, 0.1, 0.5, 1, 1.5, 2}. The parameters chosen for each language and setting (fully supervised or distant MTL) are given in Table 7. C Appendix: Additional Results We provide the full set of results for our experiments in Section 5 on how well our CLMs model inflected forms versus uninflected forms across all 24 UD languages (Table 8). 4https://pytorch.org/ 1613 CS DE EN ES FI FR RU dev test dev test dev test dev test dev test dev test dev test Mielke BPE 1.88 1.856 1.45 1.414 1.45 1.386 1.42 1.362 1.70 1.652 1.36 1.317 1.63 1.598 Mielke Full 1.95 1.928 1.51 1.465 1.45 1.387 1.42 1.363 1.79 1.751 1.36 1.319 1.74 1.709 LM 2.01 1.975 1.52 1.493 1.45 1.395 1.55 1.482 1.74 1.705 1.60 1.565 1.72 1.692 MTL 1.81 1.771 1.43 1.414 1.32 1.262 1.33 1.268 1.69 1.658 1.15 1.104 1.62 1.596 Table 6: Results on Multilingual Wikipedia Corpus (MWC) in bits per character (BPC), trained on the vocabulary from Mielke and Eisner (2019). Distant MTL Fully-Supervised Lang MTL layer δ MTL layer δ BG 2 1.0 CA 1 2.0 CS 2 1.5 2 1.5 DA 2 0.5 DE 2 2.0 1 1.0 EN 2 1.0 2 1.0 ES 2 1.0 1 1.5 FA 1 0.01 FR 3 1.0 1 2.0 HI 1 2.0 HR 2 1.0 IT 1 2.0 LV 2 1.0 NL 2 1.5 PL 2 1.0 PT 2 1.5 RO 2 0.5 RU 3 1.0 2 1.5 SK 2 2.0 UK 2 1.0 ET 2 2.0 FI 1 0.5 2 1.0 AR 2 0.5 HE 2 0.5 Table 7: Language specific parameters for multitasked models trained in the distant MTL setting and the fullysupervised MTL setting Lang %Infl Word Type LM MTL ∆ BG 39% inflected 2.092 2.085 0.008 uninflected 2.333 2.330 0.002 CA 31% inflected 1.849 1.783 0.066 uninflected 2.007 1.943 0.064 CS 43% inflected 2.205 1.940 0.265 uninflected 2.539 2.322 0.217 DA 30% inflected 2.411 2.387 0.024 uninflected 2.559 2.552 0.007 DE 33% inflected 1.916 1.868 0.048 uninflected 2.323 2.263 0.060 EN 15% inflected 2.235 2.216 0.019 uninflected 2.579 2.571 0.008 ES 28% inflected 1.742 1.700 0.042 uninflected 2.053 2.010 0.043 FA 27% inflected 2.874 2.859 0.016 uninflected 2.499 2.492 0.007 FR 32% inflected 1.856 1.809 0.047 uninflected 2.228 2.174 0.054 HI 28% inflected 1.996 1.941 0.053 uninflected 2.270 2.228 0.042 HR 49% inflected 2.055 2.021 0.035 uninflected 2.507 2.487 0.021 IT 36% inflected 1.897 1.852 0.045 uninflected 2.056 2.010 0.046 LV 47% inflected 2.387 2.361 0.027 uninflected 2.782 2.758 0.024 NL 19% inflected 2.161 2.493 0.030 uninflected 2.131 2.468 0.025 PL 42% inflected 2.522 2.462 0.060 uninflected 2.633 2.578 0.054 PT 31% inflected 2.071 2.065 0.007 uninflected 2.214 2.205 0.009 RO 42% inflected 2.037 1.987 0.050 uninflected 2.373 2.316 0.057 RU 42% inflected 2.130 1.920 0.210 uninflected 2.583 2.424 0.159 SK 45% inflected 2.976 2.969 0.007 uninflected 3.545 3.535 0.010 UK 40% inflected 2.580 2.553 0.027 uninflected 2.553 2.956 0.009 ET 49% inflected 2.397 2.692 0.112 uninflected 2.285 2.629 0.063 FI 55% inflected 2.152 2.084 0.068 uninflected 2.402 2.339 0.063 AR 86% inflected 2.036 2.013 0.023 uninflected 3.856 3.828 0.027 HE 42% inflected 3.426 3.360 0.066 uninflected 2.168 2.211 -0.043 Table 8: BPC performance on the UD development set on inflected versus uninflected words. Bold delta values for each language indicate whether than language improves more on inflected or uninflected words by when multitasking morphology is added.
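The inflected/uninflected split underlying Table 8 can be reproduced by comparing each surface form in the UD development set to its annotated lemma; a sketch, assuming per-word BPC values have already been computed:

```python
def split_bpc_by_inflection(words):
    """words: iterable of (form, lemma, bpc) triples from the UD dev set.
    A word counts as inflected if its surface form differs from its lemma."""
    inflected, uninflected = [], []
    for form, lemma, bpc in words:
        (inflected if form != lemma else uninflected).append(bpc)
    avg = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return avg(inflected), avg(uninflected)
```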
2019
156
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1614–1619 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1614 Historical Text Normalization with Delayed Rewards Simon Flachs, Marcel Bollmann, Anders Søgaard Department of Computer Science University of Copenhagen Denmark {flachs,marcel,soegaard}@di.ku.dk Abstract Training neural sequence-to-sequence models with simple token-level log-likelihood is now a standard approach to historical text normalization, albeit often outperformed by phrasebased models. Policy gradient training enables direct optimization for exact matches, and while the small datasets in historical text normalization are prohibitive of from-scratch reinforcement learning, we show that policy gradient fine-tuning leads to significant improvements across the board. Policy gradient training, in particular, leads to more accurate normalizations for long or unseen words. 1 Introduction Historical text normalization is a common approach to making historical documents accessible and searchable. It is a challenging problem, since most historical texts were written without fixed spelling conventions, and spelling is therefore at times idiosyncratic (Piotrowski, 2012). Traditional approaches to historical text normalization relied on hand-written rules, but recently, several authors have proposed neural models for historical text normalization (Bollmann and Søgaard, 2016; Bollmann, 2018; Tang et al., 2018). Such models are trained using characterlevel maximum-likelihood training, which is inconsistent with the objective of historical text normalization; namely, transduction into modern, searchable word forms. The discrepancy between character-level loss and our word-level objective means that model decision costs are biased. Our objective, however, is reflected by the standard evaluation metric, which is computed as the fraction of benchmark words that are translated correctly. In order to mitigate the discrepancy between the optimization method and the task objective, work has been carried out on using reinforcement learning to optimize directly for the evaluation metric (Ranzato et al., 2016; Shen et al., 2016). Reinforcement learning enables direct optimization of exact matches or other non-decomposable metrics, computing updates based on delayed rewards rather than token-level error signals. This paper contrasts maximum likelihood training and training with delayed rewards, in the context of sequence-to-sequence historical text normalization (Bollmann et al., 2017). Contributions We show that training with delayed rewards achieves better performance than maximum likelihood training across six different historical text normalization benchmarks; and that training with delayed rewards is particularly helpful for long words, words where maximum likelihood training leads to predicting long words, and for unseen words. We note that our approach differs from other applications in the NLP literature in using the mean reward as our baseline, and in comparing different reward functions; we also fine-tune relying only on rewards, rather than a mixture of cross entropy loss and rewards. 2 Historical text normalization datasets Historical text normalization datasets are rare and typically rather small. Most of them are based on collections of medieval documents. 
In our experiments, we include six historical text normalization datasets: the English, Hungarian, Icelandic, and Swedish datasets from Pettersson (2016); the German dataset introduced in Bollmann et al. (2017); and the Slovene “Bohoriˇc” dataset from Ljubeˇsi´c et al. (2016). We use these datasets in the form provided by Bollmann (2019), i.e., preprocessed to remove punctuation, perform Unicode normalization, replace digits that do not require normalization with a dummy symbol, and lowercase all tokens. Table 1 gives an overview of the datasets. 1615 Language Time Period Train IAT EN English 1386–1698 148k 75% DE German 14th–16th c. 234k 30% HU Hungarian 1440–1541 134k 18% IS Icelandic 15th c. 50k 47% SL Slovene 1750–1840s 50k 41% SV Swedish 1527–1812 24k 60% Table 1: Historical datasets with the time period they represent, the size of their training sets (in tokens), and the approximate percentage of tokens that are invariant across time (IAT), i.e. where the historical and normalized forms are identical. Note the differences in the number of words that are invariant across time, i.e., where the original input word form is the correct prediction according to the manual annotations. The differences are reasons to expect performance to be higher on English, but lower on Hungarian, for example; since it is easier to learn to memorize the input than to learn abstract transduction patterns. In practice, we see differences being relatively small. Performance on English, however, is significantly higher than for the other languages (see Table 2). 3 Normalization models Our baseline model is an LSTM-based encoderdecoder model with attention. The model receives as input a sequence of characters from the source vocabulary (i1, . . . , iN). Each character it is mapped to the corresponding randomly initialized embedding, which is then given as input to the bi-LSTM encoder. The decoder then uses the Bahdanau attention mechanism (Bahdanau et al., 2014) over the encoded representation to output a sequence of characters from the target vocabulary (o1, ..., oM). Note that the input and output sequences may differ in length. Both the encoder and decoder is composed of three layers with dimensionality 256. The character embeddings have 128 dimensions. For training our maximum likelihood baseline, we use the Adam optimiser initialized with a learning rate of 0.001 and default decay rates. In addition, we use a dropout probability of 20%. The model is trained with batch size 16 for 10 epochs with early stopping. All hyper-parameters were tuned on English development data. Algorithm 1: Reinforcement learning for the neural encoder-decoder model Input : Parallel Corpus, PC; MLE pretrained parameters, θ Output: Model parameters ˆθ 1 for (X, Y ) ∈PC do 2 ( ˆY1, ... ˆYk), (P( ˆY1), ...P( ˆYk)) = sample(X, k, ˆθ); 3 Q( ˆY ) = normalise(P( ˆY )); 4 ¯r = 0 ; // expected reward 5 for ˆYi ∈ˆY do 6 ¯r+ = Q( ˆYi) · A( ˆYi); 7 end 8 backprop(ˆθ, ¯r) ; // policy gradient 9 end 10 return ˆθ Policy gradient fine-tuning We use policy gradient training with delayed rewards for fine-tuning our models: We use maximum likelihood pretraining for 10 epochs (see above) and update our model based on policy gradients computed using the REINFORCE algorithm (Williams, 1992; Sutton et al., 1999). This enables us to optimize for delayed rewards that are non-decomposable. 
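The delayed reward used below is based on edit distance; for reference, a standard dynamic-programming implementation of Levenshtein distance (our helper function, not taken from the paper's code) is shown here, and its negation serves as the sentence-level reward in what follows.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of character substitutions, insertions, and deletions
    needed to turn string `a` into string `b`."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # delete ca
                            curr[j - 1] + 1,              # insert cb
                            prev[j - 1] + (ca != cb)))    # substitute (or match)
        prev = curr
    return prev[-1]
```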
Specifically, we directly minimize a distance function between strings, e.g., Levenshtein distance, by using negative distance as a delayed reward:1 R( ˆY ) = −Levenshtein(Y, ˆY ). REINFORCE maximizes the expected reward, under some probability distribution P( ˆY |θ), parameterized by some θ. This way, the cost function, J(θ), is defined as the negative expected reward: J(θ) = −E ˆY ∼P( ˆY |θ)[R( ˆY )]. From this cost function, the PG can be derived as: PG = ∇θJ(θ) (1) = −E ˆY ∼P( ˆY |θ)[∇θ log P( ˆY ) · R( ˆY )] (2) We refer the reader to prior work for the full derivation (Williams, 1992; Karpathy, 2016). In Equation (2), there is no need to differentiate R( ˆY ), and policy gradient training therefore is possible with non-differentiable reward functions (Karpathy, 2016). To explore the search space, we use a stochastic sampling function S(X) that, given an input sequence X, produces k sample 1In §4, we compare using Levenshtein, Hamming, and Jaro-Winkler distance, with Levenshtein being consistently superior. 1616 hypotheses ˆY1, . . . , ˆYk. The hypotheses are generated by, at each time step, sampling actions based on the multinomial probability distribution of the policy. In order to reduce the search space, we sample only from the ten most likely actions at each time step. Furthermore, duplicate samples are filtered. In practice, we do not optimize directly for the reward R( ˆY ). Instead we replace it with the advantage score (Weaver and Tao, 2001; Mnih and Gregor, 2014): A( ˆY ) = R( ˆY ) −b, where b is a baseline reward (Weaver and Tao, 2001), introduced to reduce the variance in the gradients. We use the mean reward over the samples as our baseline reward. This way, the advantage scores of the samples will be centered around 0, meaning that about half of the produced samples will be encouraged and about half will be discouraged (Karpathy, 2016). We also found it necessary to normalize the probability distribution P( ˆY |X; θ) over the samples from S(X). We follow Shen et al. (2016) and define a probability distribution Q( ˆY |X; θ, α) over the subspace of S(X). Q( ˆY |X; θ, α) = P( ˆY |X; θ)α P ˆY ′∈S(X) P( ˆY ′|X; θ)α (3) This function is essentially a smoothing function over the sample probabilities, with a hyperparameter α that controls the level of smoothing. We follow Shen et al. (2016) and set α = 0.005. With these alterations, our cost function and gradients can be defined as: J(θ) = −E ˆY ∈S(X)[A( ˆY )] (4) PG = −E ˆY ∈S(X)[∇θ log Q( ˆY ) · A( ˆY )] (5) The algorithm is described in pseudocode in Algorithm 1. We optimized hyper-parameters the same way we optimized our baseline model hyperparameters. Compared to the baseline, the policy gradient model’s optimal batch size is bigger (64), and the learning rate is smaller (0.00001). Both strategies are known to increase generalization, by increasing the scale of random fluctuations in the SGD dynamics (Smith and Le, 2018; Balles et al., 2017). 4 Experiments Our experiments compare maximum likelihood training and policy gradient training across six historical text normalization datasets (cf. Table 1). Figure 1: Different reward functions on Icelandic (dev) MLE MLE+PG Error red. EN 92.76 94.18 20% DE 87.36 88.42 8% HU 86.68 88.15 11% IS 85.03 86.05 7% SL 91.16 93.92 31% SV 92.99 93.74 11% Table 2: Comparison of maximum likelihood training (MLE) and policy gradient fine-tuning (MLE+PG), given in word-level accuracy in percent, as well as the error reduction between MLE and MLE+PG. 
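To make Algorithm 1 and Equations (3)–(5) concrete, below is a minimal PyTorch-style sketch of one policy-gradient update; `model.sample` is a hypothetical interface returning k de-duplicated hypotheses with differentiable sequence log-probabilities (the paper restricts sampling to the ten most likely actions per time step), and the surrogate loss is our reading of the expected-advantage objective rather than the authors' implementation.

```python
import torch

def pg_step(model, optimizer, source, gold, k, alpha=0.005):
    """One REINFORCE-style update in the spirit of Algorithm 1.

    model.sample(source, k) -> (list of k hypothesis strings,
                                tensor of their sequence log-probabilities)
    is a hypothetical interface; `levenshtein` is the helper defined earlier.
    """
    hyps, log_p = model.sample(source, k)
    rewards = torch.tensor([-float(levenshtein(gold, h)) for h in hyps])
    advantage = rewards - rewards.mean()          # mean-reward baseline, A(Y)
    q = torch.softmax(alpha * log_p, dim=0)       # Eq. (3): P^alpha renormalised over samples
    loss = -(q * advantage).sum()                 # negative expected advantage, Eq. (4)
    optimizer.zero_grad()
    loss.backward()                               # gradients follow Eq. (5)
    optimizer.step()
    return loss.item()

# Fine-tuning hyperparameters reported above: batch size 64, learning rate 1e-5
# (this sketch processes a single source/gold pair for clarity).
```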
We optimized hyper-parameters on the English development data and used the same hyperparameters across the board (see above). Distance metric We also treated the distance metric used as our reward function as a hyperparameter. Figure 1 shows a comparison of three reward functions on the Icelandic development data: (i) the Levenshtein distance, which is the number of character operations (substitute, insert, delete) to transform one string into another; (ii) the Hamming distance, which is the number of positions at which the corresponding characters of two strings of equal length are different (we pad the shorter of the two strings with spaces); and (iii) the Jaro-Winkler distance (Cohen et al., 2003), which is a distance metric designed and best suited for short strings such as person names. Levenshtein outperforms Hamming and Jaro-Winkler distance on the English development data, as well as on the Icelandic development data. We therefore use the Levenshtein distance as the reward function in our experiments. 1617 GOLD LENGTH MLE LENGTH MLE BACKOFF IDENTICAL UNSEEN WORDS EN ∗∗0.09 ∗∗0.14 ∗∗-0.20 ∗∗-0.07 ∗∗0.10 DE ∗∗0.11 ∗∗0.10 ∗∗-0.08 -0.05 ∗∗0.12 HU ∗∗0.09 ∗∗0.11 ∗∗-0.07 ∗∗-0.03 ∗∗0.10 IS ∗∗0.04 ∗∗0.05 0.02 ∗∗0.08 ∗∗0.05 Table 3: Correlations (Pearson’s r) between improvements with reinforcement learning and datapoint characteristics; ** denotes significance with p < 0.001. Results The results are presented in Table 2.2 Generally, we see that policy gradient fine-tuning improves results across the board. For English, the error reduction is 20%. For German, Hungarian, Icelandic, Slovene, and Swedish, the error reduction is smaller (7–16%), but still considerable and highly significant (p < 0.01). Tang et al. (2018) do show, however, that multi-headed attention architectures (Vaswani et al., 2017) generally seem to outperform sequence-to-sequence models with attention for historical text normalization. This is orthogonal to the analysis presented here, and similar improvements can likely be obtained by multiheaded attention architectures. Analysis To avoid bias from small, high variance datasets, we limit error analysis to English, German, Hungarian, and Icelandic. In Table 3, we present correlation scores between our observed improvements and characteristics of the data.3 We consider the following characteristics: 1. GOLD LENGTH: Reinforcement learning with delayed rewards can potentially mitigate error propagation, and we do observe that gains from reinforcement learning, i.e., the distribution of correct normalizations by reinforcement learning that our baseline architecture classified wrongly, correlate significantly with the length of the input across all four languages. 2. MLE LENGTH: The correlations are even stronger with the length of the output of 2Note that for the MLE baseline, we performed our own hyperparameter tuning, which results in a different configuration than used in previous work (e.g., Bollmann et al., 2017; Tang et al., 2018). We observe that our baseline is weaker than the models reported in Bollmann (2019), but even so, the MLE+PG approach yields state-of-the-art performance on the Slovene dataset. 3Correlations are Pearson’s r. Samples are big enough to motivate a parametric test, but we obtain similar coefficients and significance levels with Spearman’s ρ. the MLE model. This suggests that reinforcement learning – or policy gradient training – is particularly effective on examples for which maximum likelihood training tends to predict long normalizations. 3. 
MLE BACKOFF: We also correlate gains with the distribution of instances on which the MLE backed off to predicting the original input word form. Here, we see a negative correlation, suggesting our baseline is good at predicting when the word form is invariant across time. 4. IDENTICAL: The three trends above are all quite strong. Our fourth variable is when input and output are identical (invariant across time). Here, we see mixed results. Policy gradient gains correlate negatively with invariance in English, but positively in Icelandic. 5. UNSEEN WORDS: Finally, we correlate gains with whether words had been previously seen at training time. Our policy gradient finetuned model performs much better on unseen words, and especially for English, German, and Hungarian, we see strong correlations between improvements and unseen words. Our predictions also exhibit smaller Levenshtein distances to the annotations compared to our baseline model, e.g., 0.11 vs. 0.14 for English, respectively, and 0.20 vs. 0.23 for German. 5 Conclusions Our experiments show that across several languages, policy gradient fine-tuning outperforms maximum likelihood training of sequence-tosequence models for historical text normalization. Since historical text normalization is a characterlevel transduction task, it is feasible to experiment with reinforcement learning, and we believe 1618 our results are very promising. In our error analysis, we, in addition, observe that reinforcement learning is particularly beneficial for long words and unseen words, which are probably the hardest challenges in historical text normalization. Acknowledgments Simon Flachs was supported by a PhD grant from Innovation Fund Denmark. Anders Søgaard was supported by a Google Focused Research Award. Marcel Bollmann was partly funded from the European Union’s Horizon 2020 research and innovation programme under the Marie SklodowskaCurie grant agreement No 845995. We gratefully acknowledge the donation of a Titan Xp GPU by the NVIDIA Corporation. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Lukas Balles, Javier Romero, and Philipp Hennig. 2017. Coupling adaptive batch sizes with learning rates. In Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence (UAI). Marcel Bollmann. 2018. Normalization of historical texts with neural network models. Bochumer Linguistische Arbeitsberichte, 22. Marcel Bollmann. 2019. A large-scale comparison of historical text normalization systems. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3885–3898. Association for Computational Linguistics. Marcel Bollmann, Joachim Bingel, and Anders Søgaard. 2017. Learning attention for historical text normalization by learning to pronounce. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 332–344, Vancouver, Canada. Association for Computational Linguistics. Marcel Bollmann and Anders Søgaard. 2016. Improving historical spelling normalization with bidirectional LSTMs and multi-task learning. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 131–139, Osaka, Japan. The COLING 2016 Organizing Committee. William W. 
Cohen, Pradeep Ravikumar, and Stephen E. Fienberg. 2003. A comparison of string metrics for matching names and records. In KDD Workshop on Data Cleaning and Object Consolidation. Andrej Karpathy. 2016. Deep reinforcement learning: Pong from pixels. Retrieved on 21-10-2017. Nikola Ljubeˇsi´c, Katja Zupan, Darja Fiˇser, and Tomaˇz Erjavec. 2016. Normalising Slovene data: historical texts vs. user-generated content. In Proceedings of the 13th Conference on Natural Language Processing (KONVENS 2016), volume 16 of Bochumer Linguistische Arbeitsberichte, pages 146–155, Bochum, Germany. Andriy Mnih and Karol Gregor. 2014. Neural variational inference and learning in belief networks. In Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 1791–1799, Bejing, China. PMLR. Eva Pettersson. 2016. Spelling Normalisation and Linguistic Analysis of Historical Text for Information Extraction. Doctoral dissertation, Uppsala University, Department of Linguistics and Philology, Uppsala. Michael Piotrowski. 2012. Natural Language Processing for Historical Texts. Number 17 in Synthesis Lectures on Human Language Technologies. Morgan & Claypool. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2–4, 2016, Conference Track Proceedings. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1683–1692, Berlin, Germany. Association for Computational Linguistics. Samuel L. Smith and Quoc V. Le. 2018. A Bayesian perspective on generalization and stochastic gradient descent. In International Conference on Learning Representations. Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. In Proceedings of the 12th International Conference on Neural Information Processing Systems, NIPS’99, pages 1057–1063, Cambridge, MA, USA. MIT Press. Gongbo Tang, Fabienne Cap, Eva Pettersson, and Joakim Nivre. 2018. An evaluation of neural machine translation models on historical spelling normalization. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1320–1331, Santa Fe, New Mexico, USA. Association for Computational Linguistics. 1619 Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Lex Weaver and Nigel Tao. 2001. The optimal reward baseline for gradient-based reinforcement learning. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence (UAI). Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256.
2019
157
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1620–1629 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1620 Stochastic Tokenization with a Language Model for Neural Text Classification Tatsuya Hiraoka1, Hiroyuki Shindo2, and Yuji Matsumoto2,3 1Tokyo Institute of Technology 2Nara Institute of Science and Technology 3RIKEN Center for Advanced Intelligence Project (AIP) [email protected] 2{shindo,matsu}@is.naist.jp Abstract For unsegmented languages such as Japanese and Chinese, tokenization of a sentence has a significant impact on the performance of text classification. Sentences are usually segmented with words or subwords by a morphological analyzer or byte pair encoding and then encoded with word (or subword) representations for neural networks. However, segmentation is potentially ambiguous, and it is unclear whether the segmented tokens achieve the best performance for the target task. In this paper, we propose a method to simultaneously learn tokenization and text classification to address these problems. Our model incorporates a language model for unsupervised tokenization into a text classifier and then trains both models simultaneously. To make the model robust against infrequent tokens, we sampled segmentation for each sentence stochastically during training, which resulted in improved performance of text classification. We conducted experiments on sentiment analysis as a text classification task and show that our method achieves better performance than previous methods. 1 Introduction Tokenization is a fundamental problem in text classification such as sentiment analysis (Tang et al., 2014; Kim, 2014; dos Santos and Gatti, 2014), topic detection (Lai et al., 2015; Zhang et al., 2015), and spam detection (Liu and Jia, 2012; Liu et al., 2016). In text classification with neural networks, sentence representation is calculated based on tokens that compose the sentence. Specifically, a sentence is first tokenized into meaningful units such as characters, words, and subwords (Zhang et al., 2015; Joulin et al., 2017). Then, the token embeddings are looked up and fed into a neural network encoder such as a feed-forward neural network (Iyyer et al., 2015), a convolutional neural network (CNN) (Kim, 2014; Kalchbrenner et al., 2014), or a long short-term memory (LSTM) network (Wang et al., 2016a,b). For English and other languages that use the Latin alphabet, the whitespace is a good indicator of word segmentation. However, tokenization is a non-trivial problem in unsegmented languages such as Chinese and Japanese since they have no explicit word boundaries. For these languages, tokenizers based on supervised machine learning with a dictionary (Zhang et al., 2003; Kudo, 2006) have been used to segment a sentence into units (Lai et al., 2015). In addition, we use a neural network-based word segmenter to tokenize a raw corpus in Chinese text classification (Zhou et al., 2016; Zhang and Yang, 2018). In machine translation, subword tokenization with byte pair encoding (BPE) addresses the problem of unknown words and improves performance (Sennrich et al., 2016). However, segmentation is potentially ambiguous, and it is unclear whether preset tokenization offers the best performance for target tasks. To address this problem, in this paper, we propose a new tokenization strategy that segments a sentence stochastically and trains a classification model with various segmentations. 
During training, our model first segments sentences into tokens stochastically with the language model and then feeds the tokenized sentences into a neural text classifier. The text classifier is trained to decrease the cross-entropy loss for true labels, and the language model is also learned with the sampled tokenization. This enables the model to segment the test dataset by taking into account recent tokenization in training. We find that sampling the tokens of a sentence stochastically renders the text classifier more robust to tokenization. Additionally, updating the language model improves the performance of the test set. 1621 公司也未修正 Stochastic Tokenization Text Classifier 公司也未修正 Deterministic Tokenization Text Classifier 公司/ 也/ 未/ 修正 Possible Tokenization Language Model reflect sampled tokenization on language model Tokenizer 公司/ 也/ 未/ 修正 Sampled Tokenization company has not corrected 修 正 公司也 公 司 未修正 公司 未 也 修正 < > Figure 1: (Top) Schematic of the previous classification model with deterministic tokenization, and (bottom) our proposed model, which tokenizes a raw sentence stochastically with a language model and updates it by sampled tokens in the training phase. < and > in the lattice are special tokens indicating the beginning and end of a sentence, respectively. We input a tokenized sentence into the neural text classifier, and it is trained with its gold label. 2 Neural Text Classification Text classification refers to the classifying of a sentence into a corresponding label. Typically, a neural network text classifier represents the sentence s = t1...tn...tN as a vector vs and predicts the distribution of labels by transforming the vector. For example, vs is given by a forward LSTM as ctoken n , htoken n = LSTM(ctoken n−1 , htoken n−1 , vtn) vs = htoken N (1) where tn is the n-th token composing a sentence of length N, and vtn is the vector for token tn. h and c are output vectors and cell states of LSTM, respectively. The N-th output vector htoken N of LSTM is assigned to the token vector vs. The token vector vt is obtained by concatenating a token-level representation vtoken and a character-level representation vchar as follows: vt = W cat(vtoken t ; vchar t ) + bcat (2) where vtoken t is extracted from a lookup table, and vchar t is calculated by a single-layered and unidirectional LSTM from embeddings of the characters composing the token as well as the token-level LSTM (1). W cat and bcat are parameters. The probability p(ys = u|vs) that the sentence class ys is a u-th class is calculated by a decoder with a linear layer as p(ys = u|vs) = softmax(W decvs + bdec)u (3) where W dec and bdec are the parameters, and softmax(·) refers to the softmax function. (·)u is the u-th element of a vector. The neural text classifier is trained with the optimizer to minimize cross-entropy loss for gold labels. 3 Proposed Model 3.1 Model Outline We focus on the tokenization of neural text classification. During the training phase of text classification, the proposed model tokenizes an input sentence stochastically in every epoch with a language model. A neural text classifier takes the tokenized sentence and predicts a label for the sentence. In the evaluation, our model tokenizes the test set by the Viterbi algorithm with a language model. When sampling tokenization in training, we consider that the model can achieve higher performance by tokenizing test data under the same criterion used in training. 
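For reference, the classifier of Section 2 (Equations 1–3) can be sketched in PyTorch roughly as follows; the class and argument names are ours, the dimensions follow the experimental setup reported in Section 4, and the token-level lookup (the cache mechanism of Section 3.5) is abstracted away as a precomputed `token_vecs` tensor.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Sketch of Eqs. (1)-(3): a character LSTM builds v^char_t, which is
    concatenated with a token-level vector, projected (Eq. 2), run through a
    token-level LSTM (Eq. 1), and decoded with a linear softmax layer (Eq. 3)."""

    def __init__(self, n_chars, n_labels, char_dim=128, tok_dim=512, sent_dim=1024):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_dim, batch_first=True)
        self.concat_proj = nn.Linear(tok_dim + char_dim, tok_dim)   # W^cat, b^cat
        self.tok_lstm = nn.LSTM(tok_dim, sent_dim, batch_first=True)
        self.decoder = nn.Linear(sent_dim, n_labels)                # W^dec, b^dec

    def forward(self, token_vecs, char_ids):
        # token_vecs: (1, N, tok_dim); char_ids: list of N tensors of shape (1, len(t_n))
        v_char = torch.stack(
            [self.char_lstm(self.char_emb(c))[0][:, -1] for c in char_ids], dim=1)
        v_t = self.concat_proj(torch.cat([token_vecs, v_char], dim=-1))  # Eq. (2)
        _, (h, _) = self.tok_lstm(v_t)                                   # Eq. (1)
        return torch.softmax(self.decoder(h[-1]), dim=-1)                # Eq. (3)
```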
For example, when a classification model is trained with the word “anthropology” tokenized as “an/thro/polo/gy,” the similar word “anthropological” in the test data should be tokenized as “an/thro/polo/gical” rather than “anthro/polo/g/ical.” To realize this, our model 1622 updates its language model depending on the currently sampled tokens in the training phase. Algorithm 1 outlines our model. After setting the initial language model and a classifier model, every sentence in a mini-batch for training is tokenized stochastically, and the language model is updated based on the tokenized sentence. An example of our model’s processing is illustrated at the bottom of figure 1. Compared with conventional text classification with deterministic tokenization, our model incorporates a language model into the training process and trains in both tokenization and text classification simultaneously. Algorithm 1 Learning Algorithm 1: set/train a language model LM 2: set a classifier model CM 3: while epoch < maxEpoch do 4: for each miniBatch do 5: for each sentence s in miniBatch do 6: ts = tokenize s with LM 7: update LM with ts 8: end for 9: update CM with miniBatch 10: end for 11: end while 3.2 Nested Unigram Language Model To sample tokens for a sentence, we employed a nested unigram language model, which was proposed as a Bayesian framework for word segmentation (Goldwater et al., 2009). When a token t consists of M characters; that is, t = c1...cm...cM, its unigram probability p(t) in a text data is given as p(t) = count(t) + αpbase(t) P ˆt count(ˆt) + α (4) where count(t) is a function that returns the number of tokens t in the text data. pbase(t) gives the basic probability of the token t with a characterlevel language model: pbase(t : c1...cM) = puni(c1) M Y m=2 pbi(cm|cm−1) (5) To deal with a token that includes an unknown character, both puni(cm) and pbi(cm|cm−1) are also calculated by a smoothed language model. A smoothed character unigram probability puni(cm) is given as puni(cm) = count(cm) + β( 1 Y ) Y + β Y = X ˆc count(ˆc) (6) A smoothed character bigram probability pbi(cm|cm−1) is also given as pbi(cm|cm−1) = count(cm|cm−1) + γpuni(cm) count(cm−1) + γ (7) where Y is the total number of characters, and count(cm|cm−1) is the number of character bigrams. 1/Y in (6) and puni(cm) in (7) are base probabilities of the character unigram and the character bigram, respectively. α, β, and γ are hyperparameters for smoothing language models. By setting higher values for these hyperparameters, the model associates a higher probability to out-of-vocabulary (OOV) tokens. The result of this association is that the model selects OOV tokens more frequently when sampling. We use a dictionary-based morphological analyzer or unsupervised word segmentation to tokenize a corpus initially, and the language model is initialized with the tokenized corpus. 3.3 Sampling Tokenization With the nested unigram language model introduced above, the tokenization of a sentence is sampled from the distribution P(t|s) where t is possible tokenization for the sentence. A probability of tokenization is obtained by a nested language model (4) as p(t|s) = Q t∈t p(t). Following (Kudo, 2018) and (Mochihashi et al., 2009), we employ a dynamic programming (DP) technique called forward filtering backward sampling (FFBS) (Scott, 2002) to sample tokens stochastically. With FFBS, we can sample tokens in a sentence from a distribution considering all possible tokenizations within the limit of the maximum token length l. 
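The smoothed probabilities of Equations (4)–(7) only require a few counters; below is a minimal Python sketch (the class, method names, and the zero-count guard are ours, not the authors' implementation), with an `add` method that can also subtract counts when a sentence is re-tokenised.

```python
from collections import Counter

class NestedUnigramLM:
    """Sketch of the nested unigram language model of Eqs. (4)-(7). Counts are
    plain Counters so they can be decremented and re-added when a sentence is
    re-tokenised; alpha = beta = gamma = 1 matches the setting used in the paper."""

    def __init__(self, alpha=1.0, beta=1.0, gamma=1.0):
        self.alpha, self.beta, self.gamma = alpha, beta, gamma
        self.token_counts = Counter()    # count(t)
        self.char_counts = Counter()     # count(c)
        self.bigram_counts = Counter()   # count(c_m | c_{m-1}), stored as pairs
        self.total_tokens = 0
        self.total_chars = 0             # Y in Eq. (6)

    def p_char(self, c):                                  # Eq. (6)
        y = max(self.total_chars, 1)                      # guard against empty counts (our addition)
        return (self.char_counts[c] + self.beta / y) / (y + self.beta)

    def p_bigram(self, c, prev):                          # Eq. (7)
        num = self.bigram_counts[(prev, c)] + self.gamma * self.p_char(c)
        return num / (self.char_counts[prev] + self.gamma)

    def p_base(self, token):                              # Eq. (5)
        p = self.p_char(token[0])
        for prev, c in zip(token, token[1:]):
            p *= self.p_bigram(c, prev)
        return p

    def p_token(self, token):                             # Eq. (4)
        num = self.token_counts[token] + self.alpha * self.p_base(token)
        return num / (self.total_tokens + self.alpha)

    def add(self, tokens, sign=+1):                       # sign=-1 removes a sentence's counts
        for t in tokens:
            self.token_counts[t] += sign
            self.total_tokens += sign
            for c in t:
                self.char_counts[c] += sign
                self.total_chars += sign
            for prev, c in zip(t, t[1:]):
                self.bigram_counts[(prev, c)] += sign
```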
In the forward calculation of FFBS, a DP Table D is calculated as follows: D[i][j] = p(si−j:i) min(i−j,l) X k=1 D[i −j][k] D[0][1] = 1 (8) where i is the index of a character in a sentence s composed of c1...ci−j...ci...cI, and j is the length 1623 of a token. si−j:i is a token that consists of ci−j...ci, and p(si−j:i) is given by (4). D[i][j] is the marginalized probability that the token si−j:i appears in the sentence. An example of the forward calculation is illustrated in Figure 2. In the figure, the probability of a two-length token that ends with the sixth character is calculated recursively when the maximum length of a word is 3. After completing Table D, we can sample tokenization from the tail of a sentence with D. Note that our model uses the whitespaces in the sentence as the token boundaries when processing languages indicating word boundaries such as English. Figure 2: An example of forward calculation of forward filtering backward sampling for a Chinese sentence used in figure 1 with maximum length 3. In this figure, we illustrate the calculation of D[6][2]. 3.4 Updating of the Language Model To update the language model with a tokenized sentence, we follow the updating method of blocked Gibbs sampling for unsupervised word segmentation (Mochihashi et al., 2009). Before sampling tokenization, the token counts of the sentence are removed from the language model (4) and the new tokenization is sampled with the language model. After sampling, the language model is updated by adding the token counts in a currently tokenized sentence. Specifically, count(t) in (4) is reduced for every token t included in a sentence. count(c) is also reduced for all character cs included by t. We handle the adding process in the same way. By updating the language model, when evaluating the classifier on validation and test datasets, our model can reproduce the segmentation sampled in the training phase. This updating method ensures that the tokenization is consistent between training and evaluation, particularly for a sentence containing a low frequency phrase. 3.5 Embedding for Unfixed Vocabulary Since our model does not limit the vocabulary, there are many ways to tokenize a single sentence. To use token-level representations, we typically employ a lookup embedding mechanism, which requires a fixed vocabulary. In our model, however, the vocabulary changes as the language model is updated. We, therefore, introduce word embeddings with continuous cache inspired by (Grave et al., 2016; Kawakami et al., 2017; Cai et al., 2017). This method enables the proposed model to assign token-level representations to recently sampled tokens. Although embeddings of older tokens are discarded from the cache memory, we assume that meaningful tokens to solve the task appear frequently, and they remain in the cache during training if the size of the cache is large enough. By updating representations in the cache, the model can use token-level information adequately. In our embedding mechanism with a cache component, the model has a list Q that stores |Q| elements of recent tokenization history. The model also keeps a lookup table V cache composed of token-level vectors corresponding to tokens cached in Q. A token t is stored in Q, and each element in Q has a unique index q to extract the representation from V cache. A token-level embedding of the token vtoken t is obtained by extracting a vector vcache q from V cache. 
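Stepping back to Section 3.3, the forward pass of Equation (8) and the backward sampling pass can be sketched in a few lines of Python; `lm.p_token` stands for the smoothed probability of Equation (4) (for instance, the sketch above), the function name is ours, and whitespace boundaries for segmented languages are omitted.

```python
import random

def sample_segmentation(sentence, lm, max_len=8):
    """Forward-filtering backward-sampling sketch for the unigram model (Eq. 8)."""
    n = len(sentence)
    # Forward pass: D[i][j] marginalises all segmentations of sentence[:i]
    # whose last token has length j (lengths up to max_len).
    D = [[0.0] * (max_len + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, min(i, max_len) + 1):
            prefix = 1.0 if i - j == 0 else sum(D[i - j][1:])   # base case D[0][1] = 1
            D[i][j] = lm.p_token(sentence[i - j:i]) * prefix
    # Backward pass: sample token lengths from the tail of the sentence.
    tokens, i = [], n
    while i > 0:
        weights = D[i][1:min(i, max_len) + 1]
        j = random.choices(range(1, len(weights) + 1), weights=weights)[0]
        tokens.append(sentence[i - j:i])
        i -= j
    return tokens[::-1]
```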
q is an index corresponding to the token t if t is in the list Q; otherwise, the oldest token in Q drops from the list, and we assign its index q to the new token t. The representation for the new token vcache q is initialized with vchar t mentioned in section 2, and the vector for the old token that drops from the list is discarded. This embedding process is described as: vtoken t = ( V cachekt (t ∈Q) vchar t (otherwise) (9) where kt is a one-hot vector whose q-th element indicating t is 1 and 0 otherwise. A token representation obtained by cacheembedding is used as a lookup representation vtoken and transformed into a concatenated token vector vt by (2). The lookup table V cache is dealt with as a general lookup table, and we update it 1624 with gradients obtained by backward calculation from the loss function. In the evaluation phase, Q is not changed by unknown tokens in the validation and test set. 4 Experiments 4.1 Setup Dataset: To evaluate the differences caused by tokenization and embedding, we conducted experiments on short-text sentiment classification tasks. We exploited formal and informal corpora in Chinese, Japanese, and English. The NTCIR-6 dataset (Seki et al., 2007) contains newspaper articles on 32 topics in Chinese, Japanese, and English. We extracted only the sentences attached with three-class sentiment labels. and 1 to 28 topics were used for training, 29 to 30 topics for validation, and 31 to 32 topics for testing. As a social short-text dataset, we used Twitter datasets in the Japanese1 and English2 experiments. These datasets were annotated with fiveclass sentiment labels in Japanese and two-class sentiment labels in English, and 21,000 sentences were randomly selected in a well-balanced manner. We split the corpus into 18,000 for training, 2,000 for validation, and 1,000 for testing in both Japanese and English. We used ChnSentiCorp (HOTEL)3 as another dataset for Chinese sentiment classification on informal texts. We did not use any other resource except for the training datasets. Model: To compare the results from different tokenization on text classification, we used the simple neural text classifier described in section 2 in all the experiments 4. As the token vector vt mentioned in (1), we used representations by different tokenization and embedding. We initialized randomly and trained token-level and character-level representations with a classification task. The sizes of a token representation and a character representation were set as 512 and 1http://bigdata.naist.jp/˜ysuzuki/ data/twitter/ 2https://www.kaggle.com/c/ twitter-sentiment-analysis2 3http://tjzhifei.github.io/resource. html 4We conducted the same experiment with Deep Average Network (DAN) (Iyyer et al., 2015) rather than LSTM and obtained similar results. We report the experiment with the LSTM classifier because the results are more significant than the results with DAN. 128, respectively, and the size of a sentence representation was 1,024. The sentence representation was projected to a label-size vector depending on the dataset, and probabilities for labels were obtained using the softmax function. The main results are shown in Table 1. We compared the scores obtained by models trained with different tokenization. In the table, “dictionary” means the model trained with dictionarybased tokenization. Chinese and Japanese datasets are tokenized by Jieba5 and MeCab, respectively, and the English dataset is tokenized by original whitespaces. 
As a baseline model that samples tokenization, we employed SentencePiece implementation6. We used SentencePiece in both options with/without sampling (“subword” / “subword+samp”). We set the subword size as 6,000 for NTCIR in English and 8,000 for the others7. Our model is denoted as “proposed” in the table. “sp” represents the proposed method whose language model is initialized with dictionary-based tokenization, and “unsp” represents the model initialized with unsupervised word segmentation. The sizes of the cache for the proposed model were the same as the sizes of the subword vocabulary for SentencePiece. We set the maximum length of the token for our method as eight for every language. When initializing the language model with a dictionary-based tokenization, the corpus was retokenized into tokens shorter than eight characters depending on the language model. The hyperparameters for smoothing were set as α = β = γ = 1 in both pretraining for unsupervised word segmentation and training for classification. Dropout layers were used for embedding and sentence representations with a rate of 0.5. We used the softmax cross-entropy loss for optimization, and the parameters were optimized by Adam (Kingma and Ba, 2014). We trained the models in 30 epochs, and the model with the highest score on the validation dataset was selected and evaluated on the test dataset. In this paper, we report the average F1 score in five experiments. 5https://github.com/fxsjy/jieba 6https://github.com/google/ sentencepiece 7Although we should have set the subword size for the English NTCIR as 8,000 as well as the other datasets, we had to use 6,000 because the English dataset was too small to make more than 8,000 subwords with SentencePiece. 1625 Chinese Japanese English NTCIR HOTEL NTCIR TWITTER NTCIR TWITTER dictionary 50.21 85.28 55.54 65.00 49.52 71.40 subword 50.95 86.45 52.87 66.25 52.19 72.65 subword+samp 51.32 87.61 51.36 66.25 53.90 73.15 proposed(sp) 50.91 86.62 58.27 66.50 56.73 73.66 proposed(unsp) 49.54 87.29 53.07 67.75 54.09 74.80 Table 1: F1 scores (%) from the models trained with different methods of tokenization. The highest scores among all methods are highlighted by bold font, and the highest scores among unsupervised tokenization models are highlighted with an underline. 4.2 Results First, we analyzed the overall results of the experiment. The highest scores among all tokenization methods are highlighted by bold font in Table 1. As shown in the table, the proposed method obtained the best scores in the Japanese and English datasets. SentencePiece with a sampling option, however, scored the highest in the Chinese datasets. This is because the Chinese vocabulary is larger than the Japanese and English vocabularies. In other words, Chinese datasets have a larger number of types of n-grams. We consider that the cache size of the proposed method is not sufficient to store meaningful words to solve the task of the Chinese dataset. Second, we focus on the results by the supervised methods (“dictionary” and “proposed(sp)”). The language model of the proposed method “sp” is initialized by corpus segmented by the “dictionary” method and trained by sampled tokenization while training a classifier. The table shows that the scores from our method surpassed the dictionarybased segmentation for all datasets. We conclude that the proposed method is superior to the method that trains a classifier with dictionary-based tokenization. 
Third, we analyzed the scores obtained using unsupervised methods (“subword”, “subword+samp”, and “proposed(unsp)”). The highest scores among the unsupervised methods are emphasized by an underline. The proposed method obtained the best scores for the Japanese and English datasets, but SentencePiece was superior for the Chinese dataset as described in the overall comparison. Finally, we compare the proposed methods. The proposed model whose language model is initialized by a dictionary (“sp”) obtained higher scores on the NTCIR dataset in every language. On the other hand, the model with unsupervised initialization scored higher on SNS dataset for all languages. From these results, we conclude that the performance of our model improved with dictionary-based segmentation for formal corpus while unsupervised initialization improved the performance of informal corpus when a generally used dictionary was employed. 5 Discussion 5.1 Cache Size In the main experiment described in the previous section, we set the size of the cache for tokenlevel embedding to be the same as the vocabulary of SentencePiece for a fair comparison. As explained, the scores of our model for the Chinese dataset were lower than the scores for SentencePiece with sampling. We consider that this result was caused by the cache-overflow of the vocabulary. Therefore, we conducted an additional experiment where the size of the cache was increased. The results are shown in table 2. The cache size of the model denoted as “x2” is twice the size (16,000) of the model used in Table 1. From the result, we conclude that increasing the size of the cache improves the performance of the proposed model for the Chinese datasets. We also determine that the size of the cache used in the main experiment is sufficient to store meaningful words for the task in Japanese and English. Figure 3 shows the performances of different cache sizes on two Chinese datasets, and Table 3 shows the vocabulary sizes of the language models at the beginning of a classifier training on each dataset. From the result of the experiment on the Chinese dataset, we conclude that increasing the cache size improves performance. We also conclude that we can use the size of vocabulary of the initial language model as an indicator to set the cache size. In the figure, the performance in1626 Chinese NTCIR HOTEL subword+samp 51.32 87.61 proposed(sp) 50.91 86.62 proposed(sp)x2 51.45 87.45 proposed(unsp) 49.54 87.29 proposed(unsp)x2 51.32 88.29 Table 2: F1 scores (%) by models with different cache size for the proposed model on Chinese datasets. The size of the cache of proposed models when “+x2” is 16,000 and 8,000 otherwise. The best model in table 1 “subword+samp” is quoted as a baseline model. 84 85 86 87 88 89 90 46 47 48 49 50 51 52 4000 8000 16000 32000 HOTEL(CH) F1-score(%) NTCIR(CH) F1-score(%) cache size NTCIR(CH, sp) NTCIR(CH, unsp) HOTEL(CH, sp) HOTEL(CH, unsp) Figure 3: F1 scores (%) by models trained with different cache sizes on NTCIR(CH) and HOTEL(CH). Lanugage Dataset sp unsp Chinese NTCIR 29,299 28,232 HOTEL 22,170 15,816 Japanese NTCIR 7,623 10,139 Twitter 33,831 17,013 English NTCIR 11,449 3,544 Twitter 50,733 6,859 Table 3: The volume of vocabulary of the language model initialized by dictionary (sp) and unsupervised word segmentation (unsp) for each dataset. creases up to the cache size around the size of the initial vocabulary. 
In addition, we consider from the vocabulary sizes of the Japanese and English Twitter dataset that it is important to select an appropriate tokenizer for initialization. Although the initial vocabulary is huge for the Japanese and English datasets, the cache size is sufficient to store the useful words for classification. We consider the reason is that there are many similar but different words in the vast vocabulary of the Twitter dataset unlike the Chinese dataset, and the difference becomes small using the language model. 5.2 Sampling Option Our model has two other options for sampling tokenization: a model without sampling in the training phase (“nosamp”) and a model that samples tokenization without updating the language model (“samp”). The former means that the model tokenizes a sentence into the one-best tokenization with an initialized unigram language model while the latter can be described as a vocabulary-free version of SentencePiece. We tested this comparison on the models with dictionary-based initialization (“sp”) and the 500-epoch pretrained models (“unsp”). Table 4 shows the results. Our proposed model updating a language model is denoted as “train.” The results show that higher scores are given by updating the language model (“train”) on all the datasets. While we cannot determine comprehensively whether performance is improved by sampling without updating a language model (“samp”), from the results, we argue that the performance of the classification task is improved by sampling tokenization and updating its language model. 5.3 Case Study Figure 4 shows distributions for each label for different tokenizations for a sentence in a validation set of a Japanese Twitter dataset. Each distribution is calculated by the same model that samples tokenization and updates its language model. In the figure, “INITIAL” means a prediction by a model inputted into a sentence tokenized by an initial language model. In other words, “INITIAL” shows the prediction by the model without updating the language model. As shown in the figure, the model predicts different labels for each tokenization. The model feeding tokenization by an updated language model predicts a true label while the model with tokenization by the initial language model predicts a wrong label with a higher probability. In this example, the difference of tokenization on “電源ボ タン” and “ほしかった” has A significant effect on the prediction. Although this example was remarkable, there were many sentences where the model predicted different labels by its tokenization. 1627 Chinese Japanese English NTCIR HOTEL NTCIR TWITTER NTCIR TWITTER sp+nosamp 50.36 86.28 53.96 66.25 52.85 72.80 sp+samp 50.27 85.28 51.98 66.25 53.33 72.40 sp+train 50.91 86.62 58.27 66.50 56.73 73.66 unsp+nosamp 49.08 86.95 52.80 65.37 53.80 73.90 unsp+samp 48.95 85.95 48.35 66.37 53.80 73.60 unsp+train 49.54 87.29 53.07 67.75 54.09 74.80 Table 4: F1 scores (%) of the ablation study to compare the sampling options of the proposed model. “samp” represents a model that samples tokenization without updating its language model while “train” updates its language model depending on the sampled tokenization. 0 0.2 0.4 0.6 0.8 1 pos+neg pos neg neutral unrelated [INITIAL] Xperia_ _ __ ___ _ [UPDATED] Xperia___ _ _ ___ _ 0 0.2 0.4 0.6 0.8 1 pos+neg pos neg neutral unrelated [INITIAL] Xperia_ _ __ ___ _ [UPDATED] Xperia___ _ _ ___ _ Label Probability Figure 4: Label prediction by a trained model for a sentence with different tokenization. 
“ ” denotes word boundary, and the sentence means “I wish they had made the button on the Xperia more responsive.” True label for the sentence is “neg”. The sentence indicated by “UPDATED” is tokenized by a language model updated with sampled tokenization while “INITIAL” is segmented by an initial language model. 5.4 News-title Classification In addition to sentiment analysis, we also evaluated our model on other domains of text classification, topic classification. We employed Japanese web news corpus provided by Livedoor8 and a model classified article titles into a label of nine topics. The experiment was conducted under the same condition as the sentiment analysis described in section 4. As shown in Table 5, the proposed method with unsupervised initialization obtained the highest score. In addition to the result of sentiment analysis, we also determined that the performance improved by initializing the language model with dictionary-based tokenization. From the result, we conclude that our new tokenization strategy is effective on some classification tasks. 8https://www.rondhuit.com/download. html#ldcc F1-score dictionary 80.31 subword 80.41 subword+samp 78.95 proposed(sp) 81.71 proposed(unsp) 80.46 Table 5: F1-scores (%) using the models with different tokenizations on the news-title classification task (Japanese). 6 Related Work Our work is related to word segmentation for a neural network encoder. To tokenize a sentence into subwords without dictionary-based segmentation, BPE is commonly used in neural machine translation (NMT) (Sennrich et al., 2016). BPE forces a merger of tokens without any exceptions, and tokenization does not become natural. The problem associate with BPE has been addressed using a language model to tokenize a sentence. (Goldwater et al., 2006, 2009) proposed unsupervised word segmentation by sampling tokenization and updating a language model with Gibbs sampling. The language model for unsupervised word segmentation is smoothed with base probabilities of words to give a probability for all possible words in a text. (Mochihashi et al., 2009) extended this to the use of blocked Gibbs sampling, which samples tokenization by a sentence. The authors introduced a nested Bayesian language model that calculates a probability of a word by hierarchical language models. Recently, (Kudo and Richardson, 2018) proposed a subword generator for NMT, which tokenizes a sentence stochastically with a subwordlevel language model while (Kudo, 2018) reports improvement in performance of NMT by the idea 1628 of sampling tokenization. Considering multiple subwords makes an NMT model robust against noise and segmentation errors. This differs from BPE in that it does not merge tokens uniquely by its frequency and differs from unsupervised word segmentation with a language model in that it limits subword vocabulary. Our work is similar to this line of research, but we focus on NLP tasks that do not require decoding such as text classification. The proposed model is different from this work in some respects: the vocabulary is not fixed, and the language model is updated by sampled tokenization. In this paper, we address the problem of tokenization for a neural network encoder by modifying a tokenization strategy. Another approach to address this problem alters the architecture of a neural network. For example, (Zhang and Yang, 2018) employs lattice LSTM, which considers all possible tokenizations of a sentence for named entity recognition. 
Lattice structured RNNs are also used for neural Chinese word segmentation such as (Chen et al., 2017) and (Yang et al., 2018), and they report improvement in performance. Our work is different from these works from the perspective that we address the problem focusing on the segmentation itself not the architecture of a neural network as well as (Kudo, 2018). We used a caching mechanism proposed to augment neural language models (Merity et al., 2016; Grave et al., 2016). This is also exploited for an open-vocabulary language model (Kawakami et al., 2017). (Cai et al., 2017) proposed a similar architecture to the caching mechanism for neural Chinese word segmentation. 7 Conclusion In this paper, we introduced stochastic tokenization for text classification with a neural network. Our model differs from previous methods in terms of sampling tokenization that considers all possible words under the maximum length limitation. To embed various tokens, we proposed the cache mechanism for frequent words. Our model also updates the language model depending on the sampled tokenizations in the training phase. With the updated language model, the proposed model can tokenize the test dataset considering recently used tokenization in the training phase. This results in improved performance for sentiment analysis tasks on Japanese and English datasets and Chinese datasets with a larger cache. We find that the proposed model of tokenization provides an improvement in the performance of text classification with a simple LSTM classifier. We expect our model contributes to improved performance of other complex state-of-the-art encoding architectures for text classification. Acknowledgments We are grateful to the members of the Computational Linguistics Laboratory, NAIST and the anonymous reviewers for their insightful comments. References Deng Cai, Hai Zhao, Zhisong Zhang, Yuan Xin, Yongjian Wu, and Feiyue Huang. 2017. Fast and accurate neural word segmentation for chinese. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 608–615. Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. 2017. Dag-based long short-term memory for neural word segmentation. arXiv preprint arXiv:1707.00248. Sharon Goldwater, Thomas L Griffiths, and Mark Johnson. 2006. Contextual dependencies in unsupervised word segmentation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 673–680. Association for Computational Linguistics. Sharon Goldwater, Thomas L Griffiths, and Mark Johnson. 2009. A bayesian framework for word segmentation: Exploring the effects of context. Cognition, 112(1):21–54. Edouard Grave, Armand Joulin, and Nicolas Usunier. 2016. Improving neural language models with a continuous cache. arXiv preprint arXiv:1612.04426. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1681–1691. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. volume 2, pages 427–431. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. 
A convolutional neural network for 1629 modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 655–665. Kazuya Kawakami, Chris Dyer, and Phil Blunsom. 2017. Learning to create and reuse words in openvocabulary neural language modeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1492–1502. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Taku Kudo. 2006. Mecab: Yet another part-of-speech and morphological analyzer. http://taku910.github.io/mecab/. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66–75. Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In Twenty-ninth AAAI conference on artificial intelligence. Chenwei Liu, Jiawei Wang, and Kai Lei. 2016. Detecting spam comments posted in micro-blogs using the self-extensible spam dictionary. In 2016 IEEE International Conference on Communications (ICC), pages 1–7. IEEE. Lin Liu and Kun Jia. 2012. Detecting spam in chinese microblogs-a study on sina weibo. In 2012 Eighth International Conference on Computational Intelligence and Security, pages 578–581. IEEE. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843. Daichi Mochihashi, Takeshi Yamada, and Naonori Ueda. 2009. Bayesian unsupervised word segmentation with nested pitman-yor language modeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 100–108. Association for Computational Linguistics. Cicero dos Santos and Maira Gatti. 2014. Deep convolutional neural networks for sentiment analysis of short texts. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 69–78. Steven L Scott. 2002. Bayesian methods for hidden markov models: Recursive computing in the 21st century. Journal of the American Statistical Association, 97(457):337–351. Yohei Seki, David Kirk Evans, Lun-Wei Ku, HsinHsi Chen, Noriko Kando, and Chin-Yew Lin. 2007. Overview of opinion analysis pilot task at ntcir-6. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages P1715–1725. Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentimentspecific word embedding for twitter sentiment classification. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1555– 1565. Jin Wang, Liang-Chih Yu, K Robert Lai, and Xuejie Zhang. 2016a. Dimensional sentiment analysis using a regional cnn-lstm model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 225–230. Yequan Wang, Minlie Huang, Li Zhao, et al. 2016b. Attention-based lstm for aspect-level sentiment classification. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 606–615. Jie Yang, Yue Zhang, and Shuailong Liang. 2018. Subword encoding in lattice lstm for chinese word segmentation. arXiv preprint arXiv:1810.12594. Hua-Ping Zhang, Hong-Kui Yu, De-Yi Xiong, and Qun Liu. 2003. Hhmm-based chinese lexical analyzer ictclas. In Proceedings of the second SIGHAN workshop on Chinese language processing-Volume 17, pages 184–187. Association for Computational Linguistics. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649–657. Yue Zhang and Jie Yang. 2018. Chinese ner using lattice lstm. arXiv preprint arXiv:1805.02023. Yujun Zhou, Bo Xu, Jiaming Xu, Lei Yang, and Changliang Li. 2016. Compositional recurrent neural networks for chinese short text classification. In Web Intelligence (WI), 2016 IEEE/WIC/ACM International Conference on, pages 137–144. IEEE.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1630–1640 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1630 Mitigating Gender Bias in Natural Language Processing: Literature Review Tony Sun*†, Andrew Gaut*†, Shirlyn Tang†, Yuxin Huang†, Mai ElSherief†, Jieyu Zhao‡, Diba Mirza†, Elizabeth Belding†, Kai-Wei Chang‡, and William Yang Wang† †Department of Computer Science, UC Santa Barbara ‡Department of Computer Science, UC Los Angeles {tonysun, ajg, shirlyntang, yuxinhuang}@ucsb.edu {mayelsherif, dimirza, ebelding, william}@cs.ucsb.edu {jyzhao, kwchang}@cs.ucla.edu Abstract As Natural Language Processing (NLP) and Machine Learning (ML) tools rise in popularity, it becomes increasingly vital to recognize the role they play in shaping societal biases and stereotypes. Although NLP models have shown success in modeling various applications, they propagate and may even amplify gender bias found in text corpora. While the study of bias in artificial intelligence is not new, methods to mitigate gender bias in NLP are relatively nascent. In this paper, we review contemporary studies on recognizing and mitigating gender bias in NLP. We discuss gender bias based on four forms of representation bias and analyze methods recognizing gender bias. Furthermore, we discuss the advantages and drawbacks of existing gender debiasing methods. Finally, we discuss future studies for recognizing and mitigating gender bias in NLP. 1 Introduction Gender bias is the preference or prejudice toward one gender over the other (Moss-Racusin et al., 2012). Gender bias is exhibited in multiple parts of a Natural Language Processing (NLP) system, including the training data, resources, pretrained models (e.g. word embeddings), and algorithms themselves (Zhao et al., 2018a; Bolukbasi et al., 2016; Caliskan et al., 2017; Garg et al., 2018). NLP systems containing bias in any of these parts can produce gender biased predictions and sometimes even amplify biases present in the training sets (Zhao et al., 2017). The propagation of gender bias in NLP algorithms poses the danger of reinforcing damaging * Equal Contribution. Figure 1: Observation and evaluation of gender bias in NLP. Bias observation occurs in both the training sets and the test sets specifically for evaluating the gender bias of a given algorithm’s predictions. Debiasing gender occurs in both the training set and within the algorithm itself. stereotypes in downstream applications. This has real-world consequences; for example, concerns have been raised about automatic resume filtering systems giving preference to male applicants when the only distinguishing factor is the applicants’ gender. One way to categorize bias is in terms of allocation and representation bias (Crawford, 2017). Allocation bias can be framed as an economic issue in which a system unfairly allocates resources to certain groups over others, while representation bias occurs when systems detract from the social identity and representation of certain groups (Crawford, 2017). In terms of NLP applications, allocation bias is reflected when models often perform better on data associated with majority gender, and representation bias is reflected when associations between gender with certain concepts are captured in word embedding and model parameters. In Table 1, we categorize common examples of gender bias in NLP following Crawford (2017). 
1631 Task Example of Representation Bias in the Context of Gender D S R U Machine Translation Translating “He is a nurse. She is a doctor.” to Hungarian and back to English results in “She is a nurse. He is a doctor.” (Douglas, 2017) ✓ ✓ Caption Generation An image captioning model incorrectly predicts the agent to be male because there is a computer nearby (Burns et al., 2018). ✓ ✓ Speech Recognition Automatic speech detection works better with male voices than female voices (Tatman, 2017). ✓ ✓ Sentiment Analysis Sentiment Analysis Systems rank sentences containing female noun phrases to be indicative of anger more often than sentences containing male noun phrases (Park et al., 2018). ✓ Language Model “He is doctor” has a higher conditional likelihood than “She is doctor” (Lu et al., 2018). ✓ ✓ ✓ Word Embedding Analogies such as “man : woman :: computer programmer : homemaker” are automatically generated by models trained on biased word embeddings (Bolukbasi et al., 2016). ✓ ✓ ✓ ✓ Table 1: Following the talk by Crawford (2017), we categorize representation bias in NLP tasks into the following four categories: (D)enigration, (S)tereotyping, (R)ecognition, (U)nder-representation. Briefly, denigration refers to the use of culturally or historically derogatory terms; stereotyping reinforces existing societal stereotypes; recognition bias involves a given algorithm’s inaccuracy in recognition tasks; and under-representation bias is the disproportionately low representation of a specific group. We identify that both allocative and representational harms often arise in NLP systems due to statistical patterns in the training corpora, which are then embedded in semantic representations and the model. Gender bias in NLP is a complex and compound issue, requiring interdisciplinary communication. As NLP systems have been increasingly integrated with our daily life thanks to modern AI developments, we need both immediate solutions to patch current systems as well as fundamental approaches to debias. In this paper, we provide a comprehensive literature review to summarize recent attempts for recognizing and mitigating bias in NLP systems. Most debiasing methods can be depicted as a special case in Figure 1. We make two primary contributions. (1) We summarize recent studies of algorithmic bias in NLP under a unified framework for the ease of future discussion. (2) We critically discuss issues with current debiasing methods with the purpose of identifying optimizations, knowledge gaps, and directions for future research. 2 Observing Gender Bias Recent work in analyzing gender bias in NLP has focused on quantifying bias through psychological tests, performance differences between genders for various tasks, and the geometry of vector spaces. We provide an overview of gender bias evaluation methods and discuss types of representation bias each method identifies. 2.1 Adopting Psychological Tests In psychology, the Implicit Association Test (IAT) is used to measure subconscious gender bias in humans, which can be quantified as the difference in time and accuracy for humans to categorize words as relating to two concepts they find similar versus two concepts they find different (Greenwald et al., 1998; Caliskan et al., 2017). For instance, to measure subconscious associations of genders with arts and sciences, participants are asked to categorize words as pertaining to (males or the sciences) or (females or the arts) (Nosek et al., 2009). 
The participants are then asked to categorize words as pertaining to (males or the arts) or (females or the sciences). If participants answered faster and more accurately in the former setting, it indicates that humans subconsciously associate males with the sciences and females with the arts. Caliskan et al. (2017) adopt the IAT’s core concept, measuring gender bias through the difference in strength of association of concepts, to measure bias in word embeddings using the Word Embedding Association Test (WEAT) (Caliskan et al., 2017). The authors confirm that human biases found through IAT tests exist in GloVe and Word2Vec embeddings. Finally, the authors demonstrate a positive correlation between the strength of association of an occupation word embedding with the female gender and the percentage of females in that occupation in United States, with the percentages taken from Bureau of Labor Statistics labor force participation data. Notably, Garg et al. (2018) show that bias in word 1632 embeddings can be used to track social changes such as increased or decreased female participation in the workforce. May et al. (2019) extend WEAT to create the Sentence Encoder Association Test (SEAT), capable of testing sentence encoders (e.g., ELMo (Peters et al., 2018)) for human biases found in IAT tests. 2.2 Analyzing Gender Sub-space in Embeddings Bolukbasi et al. (2016) define gender bias as the correlation between the magnitude of the projection onto the gender subspace of a word embedding representing a gender-neutral word and that word’s bias rating, as rated by crowd workers. To identify the gender subspace, they first build a linear support vector machine to classify words into a set of gender-specific and a set of gender-neutral words based on a training set of hand-selected gender-specific words. The authors then identify a gender direction by aggregating ten gender pairs (e.g. she-he, her-his, woman-man, etc.) and using principal component analysis to find a single eigenvector that exhibits significantly greater variance than the rest. Manzini et al. (2019) extend this method and their approach can be used to find non-binary gender bias by aggregating n-tuples instead of gender pairs. However, Gonen and Goldberg (2019) note that the above method fails to capture the full picture of gender bias in vector spaces. Specifically, even after the projections of word embeddings representing gender-neutral words onto the gender subspace have been removed, word embeddings representing words with similar biases still cluster together. They further introduce the notion of cluster bias. Cluster bias of a word w can be measured as the percentage of male or female stereotypical words among the k nearest neighbors of w’s embedding where the male or female stereotypical words are obtained through human annotation. 2.3 Measuring Performance Differences Across Genders In most NLP tasks, a model’s prediction should not be heavily influenced by the gender of the entity mentions or contexts in the input. To evaluate whether or not this is the case, consider two sentences that act as the inputs to a model for which the only differences are the words that correspond to gender, such as “He went to the park” vs “She went to the park”. We refer to changing the gender of the gendered nouns as gender-swapping. 
Gender-swapping can be generalized to sentences by swapping each male-definitional word with its respective female equivalent and vice-versa (Zhao et al., 2018a; Lu et al., 2018; Kiritchenko and Mohammad, 2018). If the model does not make decisions based on genders, it should perform equally for both sentences. Otherwise, the difference in evaluation scores reflects the extent of gender bias found in the system. For example, Dixon et al. (2017) introduce two metrics to measure these performance differences – False Positive Equality Difference (FPED) and False Negative Equality Difference (FNED) – that have been used to measure gender bias in abusive language detection (Park et al., 2018). These are defined as the differences in the false positive and false negative rates, respectively, of predictions of a model between original and gender-swapped inputs. We note that these measurements can generalize to tasks aside from abusive language detection. By designing test sets, measuring performance differences between genders reveals representational gender bias in the context of recognition, stereotyping, and under-representation. If, for instance, an image captioning model is worse at recognizing a woman than a man when they are each sitting in front of a computer (Burns et al., 2018), that is a clear indicator of recognition bias. If this prediction inaccuracy arises as a consequence of the algorithm’s association between man and computer, then this example also reveals stereotyping in the image captioning model. One can also imagine that if the model is not debiased and these errors propagate over a large sample of images, then the model may further contribute to the under-representation of minorities. Standard evaluation data sets in NLP are inadequate for measuring gender bias. For one, these data sets often also contain biases (such as containing more male references than female references), so evaluation on them might not reveal gender bias. Furthermore, predictions made by systems performing complex NLP tasks depend on many factors; we must carefully design data sets to isolate the effect of gender on the output in order to be able to probe gender bias. We name these data sets Gender Bias Evaluation Testsets (GBETs). The goal of designing GBETs is to provide a check that NLP systems avoid making mistakes due to gender bias. Some may argue that the artificial design of GBETs does not reflect the true distribution of the data, implying that these evaluations are artificial. We argue that if humans can avoid making mistakes due to gender bias, then machines should as well. Additionally, systems that make biased predictions may discourage minorities from using those systems and having their data collected, thus worsening the disparity in the data sets (Hashimoto et al., 2018). We provide an overview of publicly available GBETs in Table 2.
Table 2: Summary of GBETs. GBETs evaluate models trained for specific tasks for gender bias. GBETs use differences in values of the probing concept or prediction accuracies relating to the probing concept between gender-swapped data points to measure bias.
Data Set | Task | Probing Concept | Size
Winogender Schemas (Rudinger et al., 2018) | Coreference Resolution | Occupation | 720 English Sentences
WinoBias (Zhao et al., 2018a) | Coreference Resolution | Occupation | 3,160 English Sentences
GAP (Webster et al., 2018) | Coreference Resolution | Names | 4,454 English Contexts
EEC (Kiritchenko and Mohammad, 2018) | Sentiment Analysis | Emotion | 8,640 English Sentences
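To make this evaluation concrete, the following is a minimal sketch (not code released by any of the cited authors) of gender-swapping and of FPED/FNED as described above, i.e., the absolute differences in false positive and false negative rates between original and gender-swapped inputs; the swap list and the model.predict interface are illustrative assumptions.

```python
# Minimal sketch of gender-swapping and the FPED/FNED metrics described above.
# SWAP and model.predict are illustrative assumptions, not a real resource/API.

SWAP = {"he": "she", "she": "he", "his": "her", "her": "his",
        "man": "woman", "woman": "man", "father": "mother", "mother": "father"}

def gender_swap(tokens):
    """Swap each gender-definitional token with its counterpart."""
    return [SWAP.get(t.lower(), t) for t in tokens]

def error_rates(preds, labels):
    """False positive and false negative rates for binary predictions."""
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    neg = max(sum(1 for y in labels if y == 0), 1)
    pos = max(sum(1 for y in labels if y == 1), 1)
    return fp / neg, fn / pos

def fped_fned(model, sentences, labels):
    """Equality differences between original and gender-swapped inputs."""
    orig = [model.predict(s) for s in sentences]             # hypothetical API
    swapped = [model.predict(gender_swap(s)) for s in sentences]
    fpr_o, fnr_o = error_rates(orig, labels)
    fpr_s, fnr_s = error_rates(swapped, labels)
    return abs(fpr_o - fpr_s), abs(fnr_o - fnr_s)
```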
Gender-swapped GBETs: In the following, we review GBETs in coreference resolution and sentiment analysis applications. For coreference resolution, Rudinger et al. (2018) and Zhao et al. (2018b) independently designed GBETs based on Winograd Schemas. The corpus consists of sentences which contain a gender-neutral occupation (e.g., doctor), a secondary participant (e.g., patient), and a gendered pronoun that refers either the occupation or the participant. The coreference resolution system requires the identification of the antecedent of the pronoun. For each sentence, Rudinger et al. (2018) consider three types of pronouns (female, male, or neutral), and Zhao et al. (2018b) consider male and female pronouns. The two datasets have a few notable differences (see the discussion in (Rudinger et al., 2018)). Note that simply measuring a global difference in accuracies of a model between inputs with different gendered pronouns is insufficient. For example, a model could predict females and males to be coreferent to “secretary” with 60% and 20% accuracy, respectively. If that same model predicts females and males coreferent to “doctor” with 20% and 60% accuracy, respectively, then the global average accuracy for each gender is equivalent, yet the model exhibits bias.1 Therefore, Zhao 1For the sake of simplicity, we illustrate the motivation in accuracy. The coreference resolution systems may be evaluated using a different metric. et al. (2018b) and Rudinger et al. (2018) design metrics to analyze gender bias by examining how the performance difference between genders with respect to each occupation correlate with the occupational gender statistics from the U.S Bureau of Labor Statistics. Another GBET for coreference resolution named GAP contains sentences mined from Wikipedia and thus can perform an evaluation with sentences taken from real contexts as opposed to artificially generated ones (Webster et al., 2018). GAP does not include stereotypical nouns; instead, pronouns refer to names only. Gender bias can be measured as the ratio of F1 scores on inputs for which the pronoun is female to inputs for which the pronoun is male. Notably, sentences are not gender-swapped, so there may be differences in difficulty between sentences in male and female test sets. For sentiment analysis, a GBET dataset named Equity Evaluation Corpus (EEC) (Kiritchenko and Mohammad, 2018) is designed. Each EEC sentence contains an emotional word (e.g., anger, fear, joy, sadness), with one of five intensities for each emotion and a gender-specific word. Gender bias is measured as the difference in emotional intensity predictions between genderswapped sentences. 3 Debiasing Methods Using Data Manipulation Several approaches have been proposed for debiasing gender stereotypes in NLP by working on two tangents: (1) text corpora and their representations and (2) prediction algorithms. In this section, we will discuss the techniques to debias text corpora and word embeddings. We do the same for techniques to mitigate gender bias in algorithms in Section 4. We note that debiasing methods can be categorized as retraining and inference (see Table 3). Retraining methods require that the model is trained 1634 again, while inference methods reduce bias without requiring the existence of the original training set. Retraining methods tend to address gender bias in its early stages or even at its source. However, retraining a model on a new data set can be costly in terms of resources and time. 
Inference methods, on the other hand, do not require models to be retrained; instead, they patch existing models to adjust their outputs providing a testing-time debiasing. We will discuss different debiasing methods from these two perspectives. 3.1 Debiasing Training Corpora We review three approaches for debiasing gender in the literature. 3.1.1 Data Augmentation Oftentimes a data set has a disproportionate number of references to one gender (e.g. OntoNotes 5.0) (Zhao et al., 2018a). To mitigate this, Zhao et al. (2018a) proposed to create an augmented data set identical to the original data set but biased towards the opposite gender and to train on the union of the original and data-swapped sets. The augmented data set is created using gender-swapping. This is similar to the method used to create GBETs; however, the goal of data augmentation is to debias predictions by training the model on a gender-balanced data set, while GBETs are created specifically to evaluate the gender bias of those predictions both before and after debiasing. Data augmentation works as follows: for every sentence in the original data set, create that sentence’s gender-swapped equivalent using the procedure described in 2.3. Next, apply name anonymization to every original sentence and its gender-swapped equivalent. Name anonymization consists of replacing all named entities with anonymized entities, such as “E1”. For instance, Mary likes her mother Jan becomes E1 likes his father E2 after applying gender-swapping and name anonymization for data augmentation. This removes gender associations with named entities in sentences. The model is then trained on the union of the original data set with name-anonymization and the augmented data set. The identification of gender-specific words and their equivalent opposite gender word requires lists typically created by crowd workers. Data augmentation has been shown to be flexible; it can mitigate gender bias in several different models in many different tasks.
Methods | Method Type
Data Augmentation by Gender-Swapping | Retraining
Gender Tagging | Retraining
Bias Fine-Tuning | Retraining
Hard Debiasing | Inference
Learning Gender-Neutral Embeddings | Retraining
Constraining Predictions | Inference
Adjusting Adversarial Discriminator | Retraining
Table 3: Debiasing methods can be categorized according to how they affect the model. Some debiasing methods require the model to be retrained after debiasing (Retraining). Others modify existing models’ predictions or representations (Inference).
When applied to a neural network based coreference resolution model (Lee et al., 2017, 2018) originally trained on OntoNotes 5.0 which was tested on WinoBias, gender augmentation lowered the difference between F1 scores on pro-stereotypical and anti-stereotypical test sets significantly, which indicates the model was less inclined to make gender-biased predictions (Zhao et al., 2018a, 2019). In hate speech detection, data augmentation reduced FNED and FPED differences between male and female predictions of a Convolutional Neural Network by a wide margin (Park et al., 2018). Data augmentation without name-anonymization has also been used to debias knowledge graphs built from Bollywood movie scripts (Madaan et al., 2018) by swapping the nodes for the lead actor and actress, but metrics evaluating the success of gender-swapping were not provided.
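As an illustration of the augmentation recipe above (gender-swap every sentence, anonymize named entities in both versions, and train on the union), here is a minimal sketch; it is not the original pipeline, the swap list is a toy example, and the entity spans are assumed to come from gold annotations or an external tagger.

```python
# Minimal sketch of data augmentation by gender-swapping plus name anonymization.
# The swap list is a toy example; ner_spans are (start, end) token spans of
# named entities, assumed to come from gold annotations or an external tagger.

SWAP = {"he": "she", "she": "he", "his": "her", "her": "his",
        "mother": "father", "father": "mother"}

def gender_swap(tokens):
    return [SWAP.get(t.lower(), t) for t in tokens]

def anonymize(tokens, ner_spans):
    """Replace each named-entity span with a placeholder E1, E2, ..."""
    spans = sorted(ner_spans)
    out, i, eid = [], 0, 0
    while i < len(tokens):
        if spans and i == spans[0][0]:
            eid += 1
            out.append(f"E{eid}")
            i = spans.pop(0)[1]      # jump past the entity (end index is exclusive)
        else:
            out.append(tokens[i])
            i += 1
    return out

def augment(corpus):
    """corpus: iterable of (tokens, ner_spans, label) triples."""
    augmented = []
    for tokens, spans, label in corpus:
        augmented.append((anonymize(tokens, spans), label))
        augmented.append((anonymize(gender_swap(tokens), spans), label))
    return augmented

# Example: (["Mary", "likes", "her", "mother", "Jan"], [(0, 1), (4, 5)], "POS")
# yields ["E1", "likes", "her", "mother", "E2"] and ["E1", "likes", "his", "father", "E2"].
```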
Data augmentation is easy to implement, but creating the annotated list can be expensive if there is high variability in the data or if the data set is large since more annotations will be required. Furthermore, data augmentation doubles the size of the training set, which can increase training time by a factor specific to the task at hand. Lastly, blindly gender-swapping can create nonsensical sentences – for example, gender-swapping “she gave birth” to “he gave birth” (Madaan et al., 2018). 3.1.2 Gender Tagging In some tasks, like Machine Translation (MT), confounding the gender of the source of a data point can lead to inaccurate predictions. Current MT models predict the source to be male a disproportionate amount of time (Prates et al., 2018; Vanmassenhove et al., 2018). This happens because training sets are dominated by male-sourced 1635 data points, so the models learn skewed statistical relationships and are thus more likely to predict the speaker to be male when the gender of the source is ambiguous (Vanmassenhove et al., 2018). Gender tagging mitigates this by adding a tag indicating the gender of the source of the data point to the beginning of every data point. For instance, “I’m happy” would change to “MALE I’m happy.” In theory, encoding gender information in sentences could improve translations in which the gender of the speaker affects the translation (i.e. “I am happy” could translate to “Je suis heureux” [M] or “Je suis heureuse” [F]), since English does not mark the gender of the speaker in this case. The tag is then parsed separately from the rest of the data by the model. The goal is to preserve the gender of the source so the model can create more accurate translations (Vanmassenhove et al., 2018). Gender tagging is effective: a Sequence-toSequence Neural Network trained on Europarl increased BLEU scores significantly for machine translations from English to French in which the first-person speaker was female (Vanmassenhove et al., 2018). Sentences with male first-person speakers had accuracy increases by a sizeable margin. However, gender-tagging can be expensive: knowing the gender of the source of a data point requires meta-information, and obtaining this could be costly in terms of memory usage and time. Furthermore, MT models may need to be redesigned to correctly parse the gender tags. 3.1.3 Bias Fine-Tuning Unbiased data sets for a given task may be scarce, but there may exist unbiased data sets for a related task. Bias fine-tuning incorporates transfer learning from an unbiased data set to ensure that a model contains minimal bias before fine-tuning the model on a more biased data set used to train for the target task directly (Park et al., 2018). This allows models to avoid learning biases from training sets while still being adequately trained to perform a task. Bias fine-tuning has been shown to be relatively effective. Park et al. (2018) use transfer learning from a gender unbiased abusive tweets data set (Founta et al., 2018) and fine-tuning on a genderbiased sexist tweets data set (Waseem and Hovy, 2016) to train a Convolutional Neural Network (CNN). They evaluate the CNN using a GBET evaluation set (which is private, so not mentioned in 2.3). They tested the same model after training it on gender-swapped data sets as well. Park et al. (2018) find that gender-swapping was more effective at both removing bias and retaining performance than bias fine-tuning. 
However, transfer learning may have been ineffective in this case because abusive language detection data sets and sexist language detection data sets have significant differences. For more similar data sets, bias finetuning may be more effective; further testing is necessary. 3.2 Debiasing Gender in Word Embeddings Word embeddings represent words in a vector space. These embeddings have been demonstrated to reflect societal biases and changing views during social movements in the United States (Garg et al., 2018). As the word embedding model is a fundamental component in many NLP systems, mitigating bias in embeddings plays a key role in the reduction of bias that is propagated to downstream tasks (e.g., (Zhao et al., 2018a)). However, it is debatable if debiasing word embeddings is a philosophically right step towards mitigating bias in NLP. Caliskan et al. (2017) argue that debiasing word embeddings blinds an AI agent’s perception rather than teaching it to perform fair actions. We refer readers to the discussion in (Caliskan et al., 2017). It is also important to recognize that removing gender bias from the embedding space entirely is difficult. While existing methods successfully mitigate bias with respect to projection onto the gender subspace in some degrees, Gonen and Goldberg (2019) show that gender bias based on more subtle metrics such as cluster bias still exist. In the following we review two families of approaches to debias gender in word embeddings. One difference between these two types of methods is that the former does not require retraining embeddings, whereas the latter does. 3.2.1 Removing Gender Subspace in Word Embeddings Schmidt (2015) first removed similarity to the gender subspace in word embeddings by building a genderless framework using cosine similarity and orthogonal vectors (Schmidt, 2015). Removing the gender component, though, pushes the word he to become the 6th closest word to she when it was the 1,826th closest previously. The genderless 1636 Figure 2: We project five word2vec embeddings onto the ‘he’ - ‘she’ direction before and after neutralizing the gender-neutral words maestro, instructor, and homemaker and equalizing the gender-specific pair businessman and businesswoman (Bolukbasi et al., 2018). For both x and y-axes, negative values represent male gender bias and positive values represent female gender bias. framework may be flawed because the semantic definition of a given word may be closely tied to its gender component. However, a case can also be made that a word’s gender component should play a key role in its semantic definition. We encourage future work to collaborate with social scientists for further discussion on this topic. Bolukbasi et al. (2016) build upon Schmidt (2015) and propose to surgerically alter the embedding space by removing the gender component only from gender-neutral words. Instead of removing gender altogether, debiasing involves making gender-neutral words orthogonal to the gender direction (see Figure 2). Ultimately, word embeddings with reduced bias performed just as well as unaltered embeddings on coherence and analogy-solving tasks (Bolukbasi et al., 2016). 3.2.2 Learning Gender-Neutral Word Embeddings Zhao et al. (2018b) propose a new method called GN-GloVe that does not use a classifier to create a set of gender-specific words. The authors train the word embeddings by isolating gender information in specific dimensions and maintaining gender-neutral information in the other dimensions. 
They do this by (1) minimizing the negative difference (i.e. maximizing the difference) between the gender dimension in male and female definitional word embeddings and (2) maximizing the difference between the gender direction and the other neutral dimensions in the word embeddings. This allows for greater flexibility; the gender dimensions can be used or neglected. Finally, we note that both aforementioned approaches (Bolukbasi et al., 2016; Zhao et al., 2018b) used to debias word embeddings may not work with embeddings in a non-Euclidean space, such as Poincare embeddings (Nickel and Kiela, 2017), because the notion of cosine similarity would no longer apply. Also, it is unclear if these approaches can be extended to other languages beyond English, especially for languages with grammatical genders. 4 Debiasing by Adjusting Algorithms Some gender debiasing methods in NLP adjust predictions in NLP systems. We call these algorithm adjustment methods. In this section, we discuss two such approaches. 4.1 Constraining Predictions Zhao et al. (2017) show that an NLP model risks amplifying bias by making predictions which exacerbate biases present in the training set. For instance, if 80% of coreferents of “secretary” are female in a training set and a model trained on that set predicts 90% of coreferents of “secretary” in a test set to be female, then that model amplifies bias. Zhao et al. (2017) proposed Reducing Bias Amplification (RBA) based on a constrained conditional model (Roth and Yih, 2004), which takes an existing model’s optimization function and constrains that function to ensure its predictions fit defined conditions. For example, when RBA was applied to the visual semantic role labelling (Yatskar et al., 2016), it restricted the ratio of males to females predicted to be doing particular activities to prevent the model from amplifying bias through predictions. The approximate inference can be efficient solved by Lagrangian relaxation (Rush and Collins, 2012). 4.2 Adversarial Learning: Adjusting the Discriminator Zhang et al. (2018) propose a variation on the traditional generative adversarial network (Goodfellow et al., 2014) by having the generator learn with respect to a protected gender attribute. In other words, the generator attempts to prevent the discriminator from identifying the gender in a given task such as analogy completion. This method has 1637 the potential to be generalizable: it can be used to debias any model that uses gradient-based learning. 5 Conclusion and Future Directions In this paper, we summarize recent literature about recognizing and mitigating gender bias in NLP. We acknowledge that the scope of this paper is limited. There is a long history of gender stereotype study in law, psychology, media study, and many other disciplines which we do not discuss. Similar issues of algorithmic bias have also been discussed extensively in artificial intelligence, machine learning, data mining, and several other application fields (e.g., (Calders and Verwer, 2010; Feldman et al., 2015; Hardt et al., 2016; Misra et al., 2016; Kleinberg et al., 2016; Pleiss et al., 2017; Beutel et al., 2017; Misra et al., 2016)). Other important aspects such as model/data transparency (Mitchell et al., 2019; Bender and Friedman, 2018) and privacy preservation (Reddy and Knight, 2016; Elazar and Goldberg, 2018; Li et al., 2018) are also not covered in this literature survey. Besides, we refer the readers to Hovy and Spruit (2016) for a more general discussion of ethical concern in NLP. 
The study of gender bias in NLP is still relatively nascent and consequently lacks unified metrics and benchmarks for evaluation. We urge researchers in related fields to work together to create standardized metrics that rigorously measure the gender bias in NLP applications. However, we recognize that different applications may require different metrics and there are trade-offs between different notions of biases (Barocas et al., 2018; Chouldechova and Roth, 2018). Gender debiasing methods in NLP are not sufficient to debias models end-to-end for many applications. We note the following limitations of current approaches. First, the majority of debiasing techniques focus on a single, modular process of an end-to-end NLP system. It remains to be discovered how these individual parts harmonize together to form an ideally unbiased system. Second, most gender debiasing methods have only been empirically verified in limited applications (Zhang et al., 2018; Zhao et al., 2017), and it is not clear that these methods can generalize to other tasks or models. Third, we note that certain debiasing techniques may introduce noise into a NLP model, causing performance degradation. Finally, hand-craft debiasing approaches may unintentionally encode the implicit bias of the developers. Below, we identify a few future directions. Mitigating Gender Bias in Languages Beyond English. With few exceptions (Vanmassenhove et al., 2018; Prates et al., 2018), prior work has focused on mitigating gender bias in the English language. Future work can look to apply existing methods or devise new techniques towards mitigating gender bias in other languages as well. However, such a task is not trivial. Methods such as gender-swapping are relatively easy in English because English does not distinguish gender linguistically. However, in languages such as Spanish, each noun has its own gender and corresponding modifiers of the noun need to align with the gender of the noun. To perform gender-swapping in such languages, besides swapping those gendered nouns, we also need to change the modifiers. Non-Binary Gender Bias. With few exceptions (Manzini et al., 2019), work on debiasing in NLP has assumed that the protected attribute being discriminated against is binary. Non-binary genders (Richards et al., 2016) as well as racial biases have largely been ignored in NLP and should be considered in future work. Interdisciplinary Collaboration. As mentioned in Section 1, gender bias is not a problem that is unique to NLP; other fields in computer science such as data mining, machine learning, and security also study gender bias (Calders and Verwer, 2010; Feldman et al., 2015; Hardt et al., 2016; Misra et al., 2016; Kleinberg et al., 2016; Pleiss et al., 2017; Beutel et al., 2017; Kilbertus et al., 2017). Many of these technical methods could be applicable to NLP yet to our knowledge have not been studied. Additionally, mitigating gender bias in NLP is both a sociological and an engineering problem. To completely debias effectively, it is important to understand how machine learning methods encode biases and how humans perceive biases. A few interdisciplinary studies (Herbelot et al., 2012; Avin et al., 2015; Fu et al., 2016; Schluter, 2018) have emerged, and we urge more interdisciplinary discussions in terms of gender bias. 
Approaches from other technical fields may improve current debiasing methods in NLP or inspire the development of new, more effective methods even if the properties of the data or 1638 problem are different across fields. Discussions between computer scientists and sociologists may improve understanding of latent gender bias found in machine learning data sets and model predictions. 6 Acknowledgements We thank anonymous reviewers for their helpful feedback. We also acknowledge the thoughtful talks in related topics by Kate Crawford, Margaret Mitchell, Joanna J. Bryson, and several others. This material is based upon work supported in part by the National Science Foundation under Grants 1821415 and 1760523. References Chen Avin, Barbara Keller, Zvi Lotker, Claire Mathieu, David Peleg, and Yvonne-Anne Pignolet. 2015. Homophily and the Glass Ceiling Effect in Social Networks. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science (ITCS‘15), pages 41–50. ACM. Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2018. Fairness and Machine Learning. fairmlbook.org. http://www.fairmlbook.org. Emily M Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587–604. Alex Beutel, Jilin Chen, Zhe Zhao, and Ed Huai hsin Chi. 2017. Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations. In 2017 Workshop on Fairness, Accountability, and Transparency in Machine Learning. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man Is to Computer Programmer As Woman Is to Homemaker? Debiasing Word Embeddings. In Neural Information Processing Systems (NIPS‘16). Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2018. Debiaswe. https://bit.ly/2ruopBZ. Accessed on 12.10.2018. Kaylee Burns, Lisa Anne Hendricks, Trevor Darrell, Anna Rohrbach, and Kate Saenko. 2018. Women Also Snowboard: Overcoming Bias in Captioning Models. European Conference on Computer Vision (EECV’18). Toon Calders and Sicco Verwer. 2010. Three Naive Bayes Approaches for Discrimination-Free Classification. Data Mining and Knowledge Discovery, 21(2):277–292. Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics Derived Automatically from Language Corpora Contain Human-Like Biases. Science, 356(6334):183–186. Alexandra Chouldechova and Aaron Roth. 2018. The Frontiers of Fairness in Machine Learning. arXiv preprint arXiv:1810.08810. Kate Crawford. 2017. The Trouble With Bias. Keynote at Neural Information Processing Systems (NIPS‘17). Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2017. Measuring and Mitigating Unintended Bias in Text Classification. In Association for the Advancement of Artificial Intelligence (AAAI’17). Laura Douglas. 2017. AI is not Just Learning our Biases; It Is Amplifying Them. https://bit.ly/ 2zRvGhH. Accessed on 11.15.2018. Yanai Elazar and Yoav Goldberg. 2018. Adversarial Removal of Demographic Attributes from Text Data. In Empirical Methods of Natural Language Processing (EMNLP‘18). Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. 2015. Certifying and Removing Disparate Impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD‘15). 
Antigoni-Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large Scale Crowdsourcing and Characterization of Twitter Abusive Behavior. In Association for the Advancement of Artifical Intelligence (AAAI‘18). Liye Fu, Cristian Danescu-Niculescu-Mizil, and Lillian Lee. 2016. Tie-breaker: Using Language Models to Quantify Gender Bias in Sports Journalism. In Proceedings of the IJCAI workshop on NLP meets Journalism. Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word Embeddings Quantify 100 Years of Gender and Ethnic Stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635–E3644. Hila Gonen and Yoav Goldberg. 2019. Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But Do Not Remove Them. In North American Chapter of the Association for Computational Linguistics (NAACL‘19). Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets. In Advances in Neural Information Processing Systems (NIPS‘14). 1639 Anthony G Greenwald, Debbie E McGhee, and Jordan LK Schwartz. 1998. Measuring Individual Differences in Implicit Cognition: The Implicit Association Test. Journal of Personality and Social Psychology, 74(6):1464. Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of Opportunity in Supervised Learning. In Advances in Neural Information Processing Systems (NIPS‘16). Tatsunori B Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. 2018. Fairness Without Demographics in Repeated Loss Minimization. Aur´elie Herbelot, Eva Von Redecker, and Johanna M¨uller. 2012. Distributional Techniques for Philosophical Enquiry. In Proceedings of the 6th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, pages 45– 54. Association for Computational Linguistics. Dirk Hovy and Shannon L Spruit. 2016. The Social Impact of Natural Language Processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL’16), volume 2, pages 591–598. Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Sch¨olkopf. 2017. Avoiding Discrimination Through Causal Reasoning. In Neural Information Processing Systems (NIPS‘17). Svetlana Kiritchenko and Saif M Mohammad. 2018. Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems. In 7th Joint Conference on Lexical and Computational Semantics (SEM‘18). Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. 2016. Inherent Trade-offs in the Fair Determination of Risk Scores. In Computing Research Repository (CoRR ‘16). Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-End Neural Coreference Resolution. In Empirical Methods of Natural Language Processing (EMNLP‘17). Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-Order Coreference Resolution with Coarseto-Fine Inference. In Empirical Methods of Natural Language Processing (EMNLP’18). Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards Robust and Privacy-Preserving Text Representations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL‘18), pages 1650–1659. Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2018. Gender Bias in Neural Natural Language Processing. 
Nishtha Madaan, Sameep Mehta, Taneea Agrawaal, Vrinda Malhotra, Aditi Aggarwal, Yatin Gupta, and Mayank Saxena. 2018. Analyze, Detect and Remove Gender Stereotyping from Bollywood Movies. In Conference on Fairness, Accountability and Transparency (FAT‘18), pages 92–105. Thomas Manzini, Yao Chong Lim, Yulia Tsvetkov, and Alan W Black. 2019. Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings. In North American Chapter of the Association for Computational Linguistics (NAACL‘19). Chandler May, Alex Wang, Shikha Bordia, Samuel R Bowman, and Rachel Rudinger. 2019. On Measuring Social Biases in Sentence Encoders. In North American Chapter of the Association for Computational Linguistics (NAACL‘19). Ishan Misra, C. Lawrence Zitnick, Margaret Mitchell, and Ross Girshick. 2016. Seeing Through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE ’16), pages 2930–2939. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 220–229. ACM. Corinne A Moss-Racusin, John F Dovidio, Victoria L Brescoll, Mark J Graham, and Jo Handelsman. 2012. Science Faculty’s Subtle Gender Biases Favor Male Students. Proceedings of the National Academy of Sciences, 109(41):16474–16479. Maximillian Nickel and Douwe Kiela. 2017. Poincar`e Embeddings for Learning Hierarchical Representations. In Advances in Neural Information Processing Systems (NIPS‘17), pages 6338–6347. Brian A Nosek, Frederick L Smyth, Natarajan Sriram, Nicole M Lindner, Thierry Devos, Alfonso Ayala, Yoav Bar-Anan, Robin Bergh, Huajian Cai, Karen Gonsalkorale, Selin Kesebir, Norbert Maliszewski, Felix Neto, Eero Olli, Jaihyun Park, Konrad Schnabel, Kimihiro Shiomura, Bogdan Tulbure, Reinout Wiers, Monika Somogyi, Nazar Akrami, Bo Ekehammar, Michelangelo Vianello, Mahzarin Banaji, and Anthony Greenwald. 2009. National Differences in Gender-Science Stereotypes Predict National Sex Differences in Science and Math Achievement. Proceedings of the National Academy of Sciences, 106(26):10593–10597. Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Reducing Gender Bias in Abusive Language Detection. In Empirical Methods of Natural Language Processing (EMNLP‘18). 1640 Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In North American Chapter of the Association for Computational Linguistics (NAACL‘18). Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q Weinberger. 2017. On Fairness and Calibration. In Advances in Neural Information Processing Systems (NIPS‘17), pages 5680–5689. Marcelo O. R. Prates, Pedro H. Avelar, and Lu´ıs C. Lamb. 2018. Assessing gender bias in machine translation: a case study with Google Translate. Neural Computing and Applications. Sravana Reddy and Kevin Knight. 2016. Obfuscating Gender in Social Media Writing. In Proceedings of the First Workshop on NLP and Computational Social Science, pages 17–26. Christina Richards, Walter Pierre Bouman, Leighton Seal, Meg John Barker, Timo O Nieder, and Guy T‘Sjoen. 2016. Non-Binary or Genderqueer Genders. International Review of Psychiatry (IRP‘16‘). Dan Roth and Wen-tau Yih. 2004. 
A linear programming formulation for global inference in natural language tasks. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender Bias in Coreference Resolution. In North American Chapter of the Association for Computational Linguistics (NAACL‘18). Alexander M Rush and Michael Collins. 2012. A Tutorial on Dual Decomposition and Lagrangian Relaxation for Inference in Natural Language Processing. Journal of Artificial Intelligence Research, 45:305– 362. Natalie Schluter. 2018. The Glass Ceiling in NLP. In Empirical Methods of Natural Language Processing (EMNLP‘18). Ben Schmidt. 2015. Rejecting the Gender Binary: A Vector-Space Operation. https://bit.ly/ 1OhXJM0. Accessed on 11.15.2018. Rachel Tatman. 2017. Gender and Dialect Bias in YouTube‘s Automatic Captions. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing (ACL‘17), pages 53–59. Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting Gender Right in Neural Machine Translation. In Empirical Methods of Natural Language Processing (EMNLP‘18). Zeerak Waseem and Dirk Hovy. 2016. Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter. In North American Chapter of the Association for Computational Linguistics (NAACL‘16). Kellie Webster, Marta Recasens, Vera Axelrod, and Jason Baldridge. 2018. Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns. In Transactions of the ACL (TACL‘18). Mark Yatskar, Luke Zettlemoyer, and Ali Farhadi. 2016. Situation Recognition: Visual Semantic Role Labeling for Image Understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE ‘16), pages 5534–5542. Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating Unwanted Biases with Adversarial Learning. In AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES‘18). Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender Bias in Contextualized Word Embeddings. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-Level Constraints. In Empirical Methods of Natural Language Processing (EMNLP‘17). Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. In North American Chapter of the Association for Computational Linguistics (NAACL‘18). Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and KaiWei Chang. 2018b. Learning Gender-Neutral Word Embeddings. In Empirical Methods of Natural Language Processing (EMNLP‘18).
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 165–174 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 165 Reliability-aware Dynamic Feature Composition for Name Tagging Ying Lin1, Liyuan Liu2, Heng Ji1,2, Dong Yu3 and Jiawei Han2 1 Dept. of Computer Science, Rensselaer Polytechnic Institute, Troy, NY, USA 2 Dept. of Computer Science, University of Illinois at Urbana-Champaign Urbana, IL, USA {yinglin8,ll2,hengji,hanj}@illinois.edu 3 Tencent AI Lab, Bellevue, WA, USA [email protected] Abstract While word embeddings are widely used for a variety of tasks and substantially improve the performance, their quality is not consistent throughout the vocabulary due to the longtail distribution of word frequency. Without sufficient contexts, embeddings of rare words are usually less reliable than those of common words. However, current models typically trust all word embeddings equally regardless of their reliability and thus may introduce noise and hurt the performance. Since names often contain rare and unknown words, this problem is particularly critical for name tagging. In this paper, we propose a novel reliability-aware name tagging model to tackle this issue. We design a set of word frequencybased reliability signals to indicate the quality of each word embedding. Guided by the reliability signals, the model is able to dynamically select and compose features such as word embedding and character-level representation using gating mechanisms. For example, if an input word is rare, the model relies less on its word embedding and assigns higher weights to its character and contextual features. Experiments on OntoNotes 5.0 show that our model outperforms the baseline model, obtaining up to 6.2% absolute gain in F-score. In crossgenre experiments on six genres in OntoNotes, our model improves the performance for most genre pairs and achieves 2.3% absolute Fscore gain on average. 1 1 Introduction Serving as the basic unit of the model input, word embeddings form the foundation of various natural language processing techniques using deep neural networks. Embeddings can effectively encode semantic information and have proven successful in a wide range of tasks, such as sequence 1Code and resources for this paper: https://github. com/limteng-rpi/neural_name_tagging A MedChem spokesman said the products contribute about a third of MedChem's sales and 10% to 20% of its earnings MedChem spokesman said ... Word Embedding LSTM Encoder ORG O O CRF Context-only Features Rare word. Its embedding is unreliable. Rely more on surface and context clues. Common word. Its word embedding should be reliable. Reliability Signals Character-level Representation Gate Figure 1: A simplified illustration of the proposed model. We only show the backward part in the figure. labeling (Collobert et al., 2011; Chiu and Nichols, 2016; Ma and Hovy, 2016; Lample et al., 2016), text classification (Tang et al., 2014; Lai et al., 2015; Yang et al., 2016), and parsing (Chen and Manning, 2014; Dyer et al., 2015). Still, due to the long tail distribution, the quality of pre-trained word embeddings is usually inconsistent. Without sufficient contexts, the embeddings of rare words are less reliable and may introduce noise, as current models disregard their quality and consume them in the same way as well-trained embeddings for common words. 
This issue is particularly important for name tagging, the task of identifying and classifying names from unstructured texts, because names usually contain rare and unknown words, especially when we move to new domains, topics, and genres. By contrast, when encountering an unknown word, human readers usually seek other clues in the text. Similarly, when informed that an embed166 ding is noisy or uninformative, the model should rely more on other features. Therefore, we aim to make the model aware of the quality of input embeddings and guide the model to dynamically select and compose features using explicit reliability signals. For example, in Figure 1, since the model is informed of the relatively low quality of the word embedding of “MedChem”, which only occurs 8 times in the embedding training corpus, it assigns higher weights to other features such as its character-level representation and contextual features derived from its context words (e.g., “spokesman”). The basis of this dynamic composition mechanism is the reliability signals that inform the model of the quality of each word embedding. Specifically, we assume that if a word occurs more frequently, its word embedding will be more fully trained as it has richer contexts and its embedding is updated more often during training. Thus, we design a set of reliability signals based on word frequency in the embedding training corpus and name tagging training corpus. As Figure 1 shows, we use reliability signals to control feature composition at two levels in our model. At the word representation level, in addition to word embedding, we generate a characterlevel representation for each word from its compositional characters using convolutional neural networks (see Section 2.1). Such character-level representation is able to capture semantic and morphological information. For example, the character features extracted from “Med” and “Chem” may encode semantic properties related to medical and chemical industries. At the feature extraction level, we introduce context-only features that are derived only from the context and thus not subject to the quality of the current word representation. For rare words without reliable representations, the contexts may provide crucial information to determine whether they are part of names or not. For example, “spokesman”, “products”, and “sales” in the context can help the model identify “MedChem” as an organization name. Additionally, context-only features are generally more robust because most non-name tokens in the context are common words and unlikely to vary widely across topics and scenarios. To incorporate the character-level representation and contextonly features, we design new gating mechanisms to mix them with the word embedding and encoder output respectively. These reliability-aware gates learn to dynamically assign weights to various types of features to obtain an optimal mixture. Experiments on six genres in OntoNotes (see Section 3.1) show that our model outperforms the baseline model without the proposed dynamic feature composition mechanism. In the cross-genre experiments, our model improves the performance for most pairs and obtains 2.3% absolute gain in F-score on average. 2 Model In this section, we will elaborate each component of our model. In Section 2.1, we will describe the baseline model for name tagging. After that, we will introduce the frequency-based reliability signals in Section 2.2. 
In Section 2.3, We will elaborate how we guide gates to dynamically compose features at the word representation level and feature extraction level. 2.1 Baseline Model We adopt a state-of-the-art name tagging model LSTM-CNN (Long-short Term Memory - Convolutional Neural Network) (Chiu and Nichols, 2016) as our base model. In this architecture, the input sentence is represented as a sequence of vectors X = {x1, ..., xL}, where xi is the vector representation of the i-th word, and L is the length of the sequence. Generally, xi is a concatenation of word embedding and character-level representation generated with a group of convolutional neural networks (CNNs) with various filter sizes from compositional character embeddings of the word. Next, the sequence X is fed into a bi-directional Recurrent Neural Network (RNN) with Longshort Term Memory (LSTM) units (Hochreiter and Schmidhuber, 1997). The bi-directional LSTM network processes the sentence in a sequential manner and encodes both contextual and non-contextual features of each word xi into a hidden state hi, which is afterwards decoded by a linear layer into yi. Each component of yi represents the score for the corresponding name tag category. On top of the model, a CRF (Lafferty et al., 2001) layer is employed to capture the dependencies among predicted tags. Therefore, given an input sequence X and the output of the linear layer Y = {y1, ..., yL}, we define the score of a se167 quence of predictions ˆz = {ˆz1, ..., ˆzL} to be s(X, ˆz) = L+1 X i=1 Aˆzi−1,ˆzi + L X i=1 yi,ˆzi, where Aˆzi−1,ˆzi is the score of transitioning from tag ˆzi−1 to tag ˆzi, and yi,ˆzi is the component of yi that corresponds to tag ˆzi. Additionally, ˆz0 and ˆzL+1 are the <start> and <end> tags padded to the predictions. During training, we maximize the sentencelevel log-likelihood of the true tag path z given the input sequence as log p(z|X) = log  es(X,z) P ˆz∈Z es(X,ˆz)  = s(X, z) −log X ˆz∈Z es(X,ˆz), where Z is the set of all possible tag paths. Note that in addition to word embeddings and character-level representations, (Chiu and Nichols, 2016) uses additional features such as capitalization and lexicons, which are not included in our implementation. Other similar name tagging model will be discussed in Section 4. 2.2 Reliability Signals As the basis of the proposed dynamic feature composition mechanism, reliability signals aim to inform the model of the quality of input word embeddings. Due to the lack of evaluation methods that directly measure the reliability of a single word embedding (Bakarov, 2018), we design a set of reliability signals based on word frequency as follows: 1. Word frequency in the word embedding training corpus fe. Generally, if a word has more occurrences in the corpus, it will appear in more diverse contexts, and its word embedding will be updated more times. 2. Word frequency in the name tagging training set fn. By fine-tuning pre-trained word embeddings, the name tagging model can encode task-specific information (e.g., “department” is often part of an organization name) into embeddings of words in the name tagging training set and improve their quality. Because word frequency has a broad range of values, we normalize it with tanh (λf), where λ is set to 0.001 for fe and 0.01 for fn as the average word frequency is higher in the embedding training corpus. We do not use relative frequency because it turns low frequencies into very small numbers close to zero. 
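For concreteness, below is a minimal sketch of how the reliability-signal vector described above could be assembled from raw counts; the exact feature layout in the authors' implementation may differ.

```python
import math

# Minimal sketch of the frequency-based reliability signals described above;
# the exact feature layout in the released implementation may differ.
EMB_THRESHOLDS = [5, 10, 100, 1000, 10000]  # thresholds for f_e (embedding corpus)
TAG_THRESHOLDS = [5, 10, 50]                # thresholds for f_n (name tagging training set)

def reliability_signals(f_e, f_n):
    """f_e, f_n: raw counts of the word in the two corpora."""
    numeric = [math.tanh(0.001 * f_e),      # lambda = 0.001 for the embedding corpus
               math.tanh(0.01 * f_n)]       # lambda = 0.01 for the smaller tagging corpus
    binary = [1.0 if f_e < t else 0.0 for t in EMB_THRESHOLDS]
    binary += [1.0 if f_n < t else 0.0 for t in TAG_THRESHOLDS]
    # During training, a dropout layer with p = 0.2 is applied to this vector.
    return numeric + binary                 # the 10-dimensional signal vector x_r
```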
Using tanh as the normalization function, the model can react more sensitively to lower frequency values. In addition to the above numeric signals, we introduce binary signals to give the model more explicit clues about the rarity of each word. For example, because we filter out words occurring fewer than 5 times during word embedding training, the following binary signal can explicitly inform the model whether a word is out-of-vocabulary or not:

b(f_e, 5) = 1 if f_e < 5, and 0 if f_e >= 5.

We heuristically set the thresholds to 5, 10, 100, 1000, and 10000 for f_e and 5, 10, 50 for f_n based on the average word frequency in both corpora. The reliability signals of each word are represented as a vector, of which each component is a certain numeric or binary signal. We apply a dropout layer (Srivastava et al., 2014) with probability 0.2 to the reliability signals.

2.3 Dynamic Feature Composition
Word Representation Level
It is a common practice in current name tagging models to utilize character-level representations to address the following limitations of word embeddings: 1. Word embeddings take words as atomic units and thus ignore useful subword information such as affixes; 2. Pre-trained word embeddings are not available for unknown words, which are typically represented using a randomly initialized vector in current models.

Unlike previous methods that generally use the character-level representation as an additional feature, under the assumption that word- and character-level representations learn disjoint features, we split the character-level representation into two segments: the first segment serves as an alternative representation that encodes the same semantic information as the word embedding and is mixed with the word embedding using gating mechanisms; the second segment is used as an additional feature to encode morphological information that cannot be captured by the word embedding.

As Figure 2 illustrates, given the i-th word in a sentence, x^w_i ∈ R^{d_w} denotes its word embedding, x^c_i ∈ R^{d_c} denotes its character-level representation, and x^r_i ∈ R^{d_r} denotes its reliability signals. The character-level representation x^c_i consists of two subvectors: x^c_i = x^{ca}_i ⊕ x^{cc}_i, where ⊕ is the concatenation operator, x^{ca}_i ∈ R^{d_w} acts as an alternative representation to the word embedding, and x^{cc}_i ∈ R^{d_c − d_w} is concatenated as additional features.

[Figure 2: Dynamic feature composition at the word representation level. Character embeddings of "MedChem" pass through a convolution layer, max pooling, and a fully connected layer to produce x^c = x^{ca} ⊕ x^{cc}; reliability-aware gates driven by the reliability signals x^r mix x^{ca} with the word embedding x^w. In this example, because the word embedding of "MedChem" is not reliable and informative, the model should attend more to x^{ca}_i.]

To enable the model to switch between both representations accordingly, we define a pair of reliability-aware gates g^w_i and g^c_i to filter x^w_i and x^{ca}_i respectively. We refer to g^w_i as the word-level representation gate and g^c_i as the character-level representation gate. We calculate g^w_i as

g^w_i = σ(W^w x^w_i + W^c x^c_i + W^r x^r_i + b),

where W^w ∈ R^{d_w × d_w}, W^c ∈ R^{d_w × d_c}, W^r ∈ R^{d_w × d_r}, and b ∈ R^{d_w} are parameters of the gate. The character-level representation gate g^c_i is defined in the same way. Finally, the enhanced representation of the i-th word is given by

x_i = (g^w_i ◦ x^w_i + g^c_i ◦ x^{ca}_i) ⊕ x^{cc}_i,

where ◦ denotes the Hadamard product. We separately calculate g^w_i and g^c_i instead of setting g^c_i = 1 − g^w_i because word- and character-level representations are not always exclusive.
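As a concrete illustration of the reliability signals and the word-level gating described above, the following is a minimal PyTorch sketch. The names reliability_signals and WordLevelComposition are invented for this sketch, and folding W^w, W^c, W^r and b into a single linear layer over a concatenated input is an implementation convenience assumed here, not a detail taken from the paper.

```python
import math
import torch
import torch.nn as nn

def reliability_signals(f_e, f_n,
                        thr_e=(5, 10, 100, 1000, 10000), thr_n=(5, 10, 50)):
    """Numeric signals tanh(lambda * f) plus binary rarity indicators b(f, t)."""
    numeric = [math.tanh(0.001 * f_e), math.tanh(0.01 * f_n)]
    binary = [1.0 if f_e < t else 0.0 for t in thr_e] + \
             [1.0 if f_n < t else 0.0 for t in thr_n]
    return torch.tensor(numeric + binary)   # x^r, here of dimension 10

class WordLevelComposition(nn.Module):
    """Mix the word embedding x^w with the alternative character segment x^ca
    using reliability-aware gates; the extra segment x^cc is concatenated unchanged."""
    def __init__(self, d_w, d_c, d_r):
        super().__init__()
        # sigma(W^w x^w + W^c x^c + W^r x^r + b) is equivalent to one linear layer
        # applied to the concatenation [x^w; x^c; x^r]
        self.gate_w = nn.Linear(d_w + d_c + d_r, d_w)
        self.gate_c = nn.Linear(d_w + d_c + d_r, d_w)

    def forward(self, x_w, x_c, x_r):
        d_w = x_w.size(-1)
        x_ca, x_cc = x_c[..., :d_w], x_c[..., d_w:]   # split the character vector
        feats = torch.cat([x_w, x_c, x_r], dim=-1)
        g_w = torch.sigmoid(self.gate_w(feats))
        g_c = torch.sigmoid(self.gate_c(feats))
        return torch.cat([g_w * x_w + g_c * x_ca, x_cc], dim=-1)   # enhanced x_i
```

For a word such as "MedChem" with f_e = 8, the indicator b(f_e, 10) fires while b(f_e, 5) does not, marking the word as rare but still in-vocabulary.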
Feature Extraction Level
Although character-level representations can encode semantic information in many cases, they cannot perfectly replace word embeddings. For example, consider the following sentence: "How does a small town like Linpien come to be home to such a well-organized volunteer effort, and just how did the volunteers set about giving their town a make-over?" The surface information of "Linpien" does not provide sufficient clues to infer its meaning and determine whether it is a name. In this case, the model should seek other useful features from the context, such as "a small town like" in the sentence.

However, in our pilot study on OntoNotes, we observe many instances where the model fails to recognize an unseen name even with obvious context clues, along with a huge performance gap in recall between seen (92-96%) and unseen (53-73%) names. A possible reason is that the model can memorize some words without reliable representations in the training set instead of exploiting their contexts in order to reduce the training loss. As a solution to this issue, we encourage the model to leverage contextual features to reduce overfitting to seen names. Compared to names, the context usually consists of more common words. Therefore, contextual features should be more robust when we apply the model to new data.

In an LSTM, each hidden state h_i is computed from the previous forward hidden state \overrightarrow{h}_{i−1}, the next backward hidden state \overleftarrow{h}_{i+1}, and the current input x_i. To obtain features that are independent of the current input and not affected by its quality, we define context-only features as

o_i = \overrightarrow{o}_i ⊕ \overleftarrow{o}_i = F(\overrightarrow{h}_{i−1}) ⊕ F'(\overleftarrow{h}_{i+1}),

where F and F' are affine transformations followed by a non-linear function such that o_i ∈ R^{2 d_h} has the same dimensionality as h_i. In order to find an optimal mixture of h_i and o_i according to the reliability of the representations of the current word and its context words, we define two pairs of gates to control the composition: the forward gates \overrightarrow{g}^h_i and \overrightarrow{g}^o_i, and the backward gates \overleftarrow{g}^h_i and \overleftarrow{g}^o_i. Figure 3 illustrates how to obtain the forward context-only features \overrightarrow{o}_i and mix them with \overrightarrow{h}_i using reliability-aware gates.

[Figure 3: Dynamic feature composition at the feature extraction level, illustrated on the context "small town like Linpien". The forward LSTM hidden states and the context-only features o_i are mixed by reliability-aware gates driven by the reliability signals of the current word and its context window; because "Linpien" is an unknown word and the words in the left context window are common, the model relies more on the context. Only the forward model is shown for simplicity.]

All gates are computed in the same way. Take the forward hidden state gate \overrightarrow{g}^h_i as an example:

\overrightarrow{g}^h_i = σ(U^h \overrightarrow{o}_i + U^r (x^r_i ⊕ ... ⊕ x^r_{i−C}) + b'),

where \overrightarrow{g}^h_i is parameterized by U^h ∈ R^{d_h × d_h}, U^r ∈ R^{d_h × d_r}, and b' ∈ R^{d_h}. This gate is controlled by the forward context-only features \overrightarrow{o}_i and the reliability signals (x^r_i ⊕ ... ⊕ x^r_{i−C}), where C is the context window size. By contrast, the backward gates \overleftarrow{g}^h_i and \overleftarrow{g}^o_i take as input the backward context-only features and the reliability signals of the right context. With these gates, we incorporate the context-only features by

h'_i = (\overrightarrow{g}^h_i ◦ \overrightarrow{h}_i + \overrightarrow{g}^o_i ◦ \overrightarrow{o}_i) ⊕ (\overleftarrow{g}^h_i ◦ \overleftarrow{h}_i + \overleftarrow{g}^o_i ◦ \overleftarrow{o}_i).

The enhanced hidden state h'_i is then decoded by a subsequent linear layer, as in the baseline model.
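A sketch of the forward half of this composition is given below (the backward direction is symmetric, and concatenating the two halves yields h'_i). The class name, the use of tanh inside F, and merging U^h, U^r and b' into one linear layer over a concatenated input are assumptions of this sketch rather than details from the paper.

```python
import torch
import torch.nn as nn

class ForwardContextMixer(nn.Module):
    """Derive forward context-only features from the previous hidden state and
    mix them with the forward hidden state using reliability-aware gates."""
    def __init__(self, d_h, d_r, C):
        super().__init__()
        self.F = nn.Sequential(nn.Linear(d_h, d_h), nn.Tanh())   # affine + non-linearity
        gate_in = d_h + (C + 1) * d_r   # o_i plus reliability signals of the left window
        self.gate_h = nn.Linear(gate_in, d_h)
        self.gate_o = nn.Linear(gate_in, d_h)

    def forward(self, h_prev, h_i, r_window):
        """h_prev: forward hidden state of word i-1; h_i: forward hidden state of word i;
        r_window: concatenated reliability signals x^r_i ... x^r_{i-C}."""
        o_i = self.F(h_prev)                        # context-only feature (forward half)
        gate_in = torch.cat([o_i, r_window], dim=-1)
        g_h = torch.sigmoid(self.gate_h(gate_in))
        g_o = torch.sigmoid(self.gate_o(gate_in))
        return g_h * h_i + g_o * o_i                # forward half of the enhanced h'_i
```

Because the gate input contains only the context-only features and the reliability signals, the mixture does not depend on the (possibly unreliable) representation of the current word itself, which is the point of this design.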
3 Experiment

3.1 Data Sets
We conduct our experiments on OntoNotes 5.0 (Weischedel et al., 2013; https://catalog.ldc.upenn.edu/LDC2013T19), the final release of the OntoNotes project, because it includes six diverse text genres that allow us to evaluate the robustness of our approach, as Table 1 shows. We adopt the following four common entity types that are also used in other data sets such as TAC-KBP (Ji et al., 2011): PER (person), ORG (organization), GPE (geo-political entity), and LOC (location). We pre-process the data with Pradhan et al.'s scripts (https://cemantix.org/data/ontonotes.html) and therefore follow their split of training, development, and test sets. We use the BIOES tag scheme to annotate tags. The S- prefix indicates a single-token name mention. Prefixes B-, I-, and E- mark the beginning, inside, and end of a multi-token name mention. A word that does not belong to any name mention is annotated as O.

Code  Genre Name              #Sentences (Train / Dev / Test)
bc    Broadcast conversation  11,866 / 2,117 / 2,211
bn    Broadcast news          10,683 / 1,295 / 1,357
mz    Magazine                 6,911 /   642 /   780
nw    Newswire                33,908 / 5,771 / 2,197
tc    Telephone conversation  11,162 / 1,634 / 1,366
wb    Weblogs                  7,592 / 1,634 / 1,366
Table 1: OntoNotes genres.

3.2 Experimental Setup
We use 100-dimensional word embeddings trained on English Wikipedia articles (2017-12-20 dump) with word2vec, and initialize character embeddings as 50-dimensional random vectors. The character-level convolutional networks have filters of width [2, 3, 4] of size 50. For the bi-directional LSTM layer, we use a hidden state size of 100. To reduce overfitting, we attach dropout layers (Srivastava et al., 2014) with probability 0.5 to the input and output of the LSTM layer. We use an Adam optimizer with a batch size of 20, a learning rate of 0.001, and linear learning rate decay.

3.3 Within-genre Results
We use the LSTM-CNN model as our baseline in all experiments. We train and test models on each genre and compare the within-genre results in Table 2. We also merge all genres and show the overall scores in the last column. Overall, with reliability-aware dynamic feature composition, our model achieves up to 6.2% absolute F-score gain on separate genres. T-test results show that the differences are considered to be statistically significant (p < 0.05) to statistically highly significant (p < 0.001).

                   bc    bn    mz    nw    tc    wb    all
LSTM-CNN          83.5  89.9  86.6  92.8  65.4  79.4  90.1
Rei et al. (2016) 85.4  90.4  87.2  92.5  71.1  77.4  90.0
Our Model*        86.2  91.2  89.8  92.9  71.3  78.5  90.3
Our Model         86.4  91.4  90.0  93.0  71.6  79.1  90.6
Table 2: Performance on OntoNotes (F-score, %). Our Model* is a variant of our model that does not incorporate reliability signals. Rei et al. (2016) use a gate to control the mixture of character- and word-level representations.

In Figure 4, we visualize the gates that control the mixture of hidden states and context-only features. Each block represents the average of the output weights of a certain gate for the corresponding word. The results of the hidden state gates \overrightarrow{g}^h and \overleftarrow{g}^h show that for common words such as "a" and "to", the model mainly relies on their original hidden states. By contrast, the context-only feature gates \overrightarrow{g}^o and \overleftarrow{g}^o assign greater weights to the unknown word "Linpien". Meanwhile, the model barely uses any context-only features for the words following "Linpien" ("come" in the forward model and "like" in the backward model) to avoid using unreliable features derived from an unknown word.
To our surprise, the model also emphasizes context-only features for the beginning and ending words. Their context-only features actually come from the zero vectors padded to the sequence during gate calculation. Our explanation is that these features may help the model distinguish the beginning and ending words, which differ from other words in some aspects. For example, capitalization is usually an indicator of proper nouns for most words except for the first word of a sentence.

[Figure 4: Visualization of reliability-aware gates (hidden state gates \overrightarrow{g}^h, \overleftarrow{g}^h and context-only feature gates \overrightarrow{g}^o, \overleftarrow{g}^o) over the sentence "How does a small town like Linpien come to be home to such a well organized volunteer effort." A darker color indicates a higher average weight.]

3.4 Cross-genre Results
Different genres in OntoNotes not only differ in style but also cover different topics and hence different names. As Table 3 shows, when tested on another genre, the model encounters a high percentage of names that are unseen in the training genre. For example, 81.3% of names are unseen when we train a model on mz and test it on bc. Therefore, through cross-genre experiments, we can evaluate the generalization capability of the model.

Train \ Test   bc    bn    mz    nw    tc    wb
bc            36.3  53.4  73.2  68.9  81.4  51.5
bn            43.9  28.5  72.8  63.6  67.8  49.9
mz            81.3  79.8  41.1  82.1  88.1  86.4
nw            40.2  43.8  70.8  33.1  55.4  55.1
tc            82.4  83.2  93.4  87.0  67.8  79.0
wb            54.6  60.6  75.4  70.8  85.3  53.4
Table 3: High percentage of unseen names (%).

Baseline Model
Train \ Test   bc    bn    mz    nw    tc    wb
bc            83.5  82.4  70.4  67.9  74.8  75.2
bn            83.5  89.9  78.7  75.6  76.8  77.1
mz            59.2  70.7  86.6  65.9  66.1  58.0
nw            82.4  85.4  72.6  92.8  74.4  76.7
tc            53.2  51.2  34.0  38.9  65.4  44.3
wb            71.5  78.1  67.5  66.6  70.1  79.4

Our Model
Train \ Test   bc    bn    mz    nw    tc    wb
bc            86.4  82.5  76.4  70.6  74.7  76.1
bn            84.8  91.4  78.7  79.2  76.5  76.1
mz            64.3  73.8  90.0  70.5  57.5  59.3
nw            81.5  86.1  74.0  93.0  74.9  78.3
tc            58.2  55.6  43.6  47.1  71.6  50.4
wb            76.3  78.4  70.5  69.6  72.3  79.1
Table 4: Cross-genre performance on OntoNotes (F-score, %).

In Table 4, we compare the cross-genre performance between the baseline and our model. For most cross-genre pairs, our model outperforms the baseline and obtains up to 9.6% absolute gains in F-score. With dynamic feature composition, the cross-genre performance of our model even exceeds the within-genre performance of the baseline model in some cases. For example, when trained on the bn portion and tested on bc, our model achieves 84.8% F-score, which is 1.3% higher than the within-genre performance of the baseline model (83.5% F-score). Such generalization capability is important for real-world applications, as it is infeasible to annotate training data for all possible scenarios.

3.5 Qualitative Analysis
In Table 5, we show some typical name tagging errors corrected by our model. We highlight the difference between the outputs of the baseline model and our model in bold. We also underline words that probably have provided useful contextual clues.

Identification Errors
⋆BASELINE: The 50-50 joint venture, which may be dubbed Eurodynamics , would have combined annual sales of at least #1.4 billion ($2.17 billion) and would be among the world's largest missile makers.
⋆OUR MODEL: The 50-50 joint venture, which may be dubbed [ORG Eurodynamics] , would have combined annual sales of at least #1.4 billion ($2.17 billion) and would be among the world's largest missile makers.
⋆BASELINE: The Tanshui of illustrations is a place of unblemished beauty, a myth that remains unshakeable. ⋆OUR MODEL: The [GPE Tanshui] of illustrations is a place of unblemished beauty, a myth that remains unshakeable. Classification Errors ⋆BASELINE: As [PER Syms] ’s “core business of off-price retailing grows, a small subsidiary that is operationally unrelated becomes a difficult distraction,” said [PER Marcy Syms], president of the parent, in a statement. ⋆OUR MODEL: As [ORG Syms] ’s “core business of off-price retailing grows, a small subsidiary that is operationally unrelated becomes a difficult distraction,” said [PER Marcy Syms], president of the parent, in a statement. ⋆BASELINE: Workers at plants in [GPE Van Nuys] , [GPE Calif.] , [GPE Oklahoma City] and [ORG Pontiac] , [GPE Mich.] , were told their facilities are no longer being considered to build the next generation of the [ORG Pontiac] Firebird and [ORG Chevrolet] Camaro muscle cars. ⋆OUR MODEL: Workers at plants in [GPE Van Nuys] , [GPE Calif.] , [GPE Oklahoma City] and [GPE Pontiac] , [GPE Mich.] , were told their facilities are no longer being considered to build the next generation of the [ORG Pontiac] Firebird and [ORG Chevrolet] Camaro muscle cars. Table 5: Name tagging result comparison between the baseline model and our model. Character-level representations are particularly effective for words containing morphemes that are related to a certain type of names. For example, “Eurodynamics” in the first sentence consists of “Euro-” and “dynamic”. The prefix “Euro-” often appears in European organization names such as “EuroDisney” (an entertainment resort) and “EuroAtlantic” (an airline), while “dynamic” is used in some company names such as Boston dynamics (a robotics company) and Beyerdynamic (an audio equipment manufacturer). Therefore, “Eurodynamics” is likely to be an organization rather than a person or location. However, for words like “Tanshui” (a town) in the second example, character-level representations may not provide much useful semantic information. In this case, contextual features (“is a place”) play an important role in determining the type of this name. Contextual features can be critical even for frequent names such as “Jordan” (can be a person or a country) and “Thomson” (can be various types of entities, including person, organization, city, and river). Take the third sentence in Table 5 as an example. The name “Syms” appears twice in the sentence, referring to the Syms Corp and Marcy Syms respectively. As they share the same wordand character-level representations, context clues such as “core business” and “president” are crucial to distinguish them. Similarly, “Pontiac” in the last example can be either a city or a car brand. Cities in its context (e.g., “Van Nuys, Calif”, “Oklahoma City”) help the model determine that the first “Pontiac” is more likely to be a GPE instead of an ORG. Still, the contextual information utilized by the current model is not profound enough, and our model is not capable of conducting deep reasoning as human readers. For example, in the following sentence: “In the middle of the 17th century the Ming dynasty loyalist Zheng Chenggong (also known as Koxinga) brought an influx of settlers to Taiwan from the Fujian and Guangdong regions of China.” Although our model successfully identifies “Zheng Chenggong” as a person, it is not able to connect this name with “Koxinga” based on the expression “also known as” to further infer that “Koxinga” should also be a person. 
172 4 Related Work Name Tagging Models Most existing methods treat name tagging as a sequence labeling task. Traditional methods leverage handcrafted features to capture textual signals and employ conditional random fields (CRF) to model label dependencies (Finkel et al., 2005; Settles, 2004; Leaman et al., 2008). Bi-LSTM-CRF (Huang et al., 2015) combines word embedding and handcrafted features, integrates neural networks with CRF, and shows performance boost over previous methods. LSTMCNN further utilizes CNN and illustrates the potential of capturing character-level signals (Chiu and Nichols, 2016). LSTM-CRF and LSTMCNNs-CRF are proposed to get rid of hand-crafted features and demonstrate the feasibility to fully rely on representation learning to capture textual features (Lample et al., 2016; Ma and Hovy, 2016; Liu et al., 2018b). Recently, language modeling methods are proven effective as the representation module for name tagging (Liu et al., 2018a; Peters et al., 2018; Akbik et al., 2018). At the same time, there has been extensive research about cross-genre (Peng and Dredze, 2017), crossdomain (Pan et al., 2013; He and Sun, 2017), cross-time (Mota and Grishman, 2008), crosstask (Søgaard and Goldberg, 2016; Liu et al., 2018b), and cross-lingual (Yang et al., 2017; Lin et al., 2018) adaptation for name tagging training. Unlike these models, although we also aim to enhance the performance on new data, we achieve this by improving the generalization capability of the model so that it can work better on unknown new data instead of transferring it to a known target setting. Word Representation Models Recent advances on representation learning allow us to capture textual signals in a data-driven manner. Based on the distributional hypothesis (i.e., “a word is characterized by the company it keeps” (Harris, 1954)), embedding methods represent each word as a dense vector, while preserving their syntactic and semantic information in a context-agnostic manner (Mikolov et al., 2013; Pennington et al., 2014). Recent work shows that word embeddings can cover textual information of various levels (Artetxe et al., 2018) and improve name tagging performance significantly (Cherry and Guo, 2015). Still, due to the long-tail distribution of word frequency, embedding vectors usually have inconsistent reliability, and such inconsistency has been long overlooked. Meanwhile, language models such as ELMo, Flair, and BERT have shown their effectiveness on constructing representations in a context-aware manner (Peters et al., 2018; Akbik et al., 2018; Devlin et al., 2018). These models are designed to better capture the context information by pre-training, while our model dynamically composes representations in a reliability-aware manner. Therefore, our model and these efforts have the potential to mutually enhance each other. In addition, (Kim et al., 2016) and (Rei et al., 2016) also mix word- and character-level representations using gating mechanisms. They use a single gate to balance the representations in a reliability-agnostic way. 5 Conclusions and Future Work We propose a name tagging model that is able to dynamically compose features depending on the quality of input word embeddings. Experiments on the benchmark data sets in both within-genre and cross-genre settings demonstrate the effectiveness of our model and verify our intuition to introduce reliability signals. 
Our future work includes integrating advanced word representation methods (e.g., ELMo and BERT) and extending the proposed model to other tasks, such as event extraction and co-reference resolution. We also plan to incorporate external knowledge and common sense as additional signals into our architecture as they are important for human readers to recognize names but still absent from the current model. Acknowledgments This work was supported by the U.S. DARPA AIDA Program No. FA8750-18-2-0014, LORELEI Program No. HR0011-15-C-0115, Air Force No. FA8650-17-C-7715, U.S. ARL NS-CTA No. W911NF-09-2-0053, and Tencent AI Lab Rhino-Bird Gift Fund. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. 173 References Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the International Conference on Computational Linguistics (COLING 2018). Mikel Artetxe, Gorka Labaka, Inigo Lopez-Gazpio, and Eneko Agirre. 2018. Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation. In Proceedings of the Conference on Computational Natural Language Learning (CoNLL 2018). Amir Bakarov. 2018. A survey of word embeddings evaluation methods. arXiv preprint arXiv:1801.09536. Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP 2014). Colin Cherry and Hongyu Guo. 2015. The unreasonable effectiveness of word representations for twitter named entity recognition. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2015). Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association of Computational Linguistics. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of The Annual Meeting of the Association for Computational Linguistics (ACL 2015). J. R. Finkel, T. Grenager, and C. Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In ACL. Zellig S Harris. 1954. Distributional structure. Word. Hangfeng He and Xu Sun. 2017. A unified model for cross-domain and semi-supervised named entity recognition in chinese social media. In Proceedings of AAAI Conference on Artificial Intelligence (AAAI 2017). Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Heng Ji, Ralph Grishman, and Hoa Trang Dang. 2011. 
An overview of the tac2011 knowledge base population track. In Proceedings of the Text Analysis Conference (TAC 2011). Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In AAAI Conference on Artificial Intelligence (AAAI 2016). John D. Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the International Conference on International Conference on Machine Learning (ICML 2001). Siwei Lai, Liheng Xu, Kang Liu, and Jian Zhao. 2015. Recurrent convolutional neural networks for text classification. In Proceedings of AAAI Conference on Artificial Intelligence (AAAI 2015). Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL HLT 2016). Robert Leaman, Graciela Gonzalez, et al. 2008. Banner: an executable survey of advances in biomedical named entity recognition. In Pacific Symposium on Biocomputing. Ying Lin, Shengqi Yang, Veselin Stoyanov, and Heng Ji. 2018. A multi-lingual multi-task architecture for low-resource sequence labeling. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2018). Liyuan Liu, Xiang Ren, Jingbo Shang, Jian Peng, and Jiawei Han. 2018a. Efficient contextualized representation: Language model pruning for sequence labeling. In EMNLP. Liyuan Liu, Jingbo Shang, Xiang Ren, Frank Fangzheng Xu, Huan Gui, Jian Peng, and Jiawei Han. 2018b. Empower sequence labeling with task-aware neural language model. In AAAI Conference on Artificial Intelligence (AAAI 2018). Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In Proceedings of The Annual Meeting of the Association for Computational Linguistics. 174 Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Cristina Mota and Ralph Grishman. 2008. Is this NE tagger getting old? In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2008). Sinno Jialin Pan, Zhiqiang Toh, and Jian Su. 2013. Transfer joint embedding for cross-domain named entity recognition. ACM Transactions on Information Systems. Nanyun Peng and Mark Dredze. 2017. Multi-task domain adaptation for sequence tagging. In Proceedings of the 2nd Workshop on Representation Learning for NLP. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2014). Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke S. Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL HLT 2018). Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj¨orkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using ontonotes. In Proceedings of the Conference on Computational Natural Language Learning (CoNLL 2013). Marek Rei, Gamal Crichton, and Sampo Pyysalo. 2016. 
Attending to characters in neural sequence labeling models. In Proceedings of International Conference on Computational Linguistics (COLING 2016). Burr Settles. 2004. Biomedical named entity recognition using conditional random fields and rich feature sets. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and Its Applications. Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In Proceedings of The Annual Meeting of the Association for Computational Linguistics (ACL 2016). Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research. Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentimentspecific word embedding for twitter sentiment classification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2014). Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0 LDC2013T19. Linguistic Data Consortium, Philadelphia, PA. Zhilin Yang, Ruslan Salakhutdinov, and William W Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. In Proceedings of International Conference on Learning Representations (ICLR 2017). Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2016).
2019
16
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1641–1650 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1641 Gender-preserving Debiasing for Pre-trained Word Embeddings Masahiro Kaneko Tokyo Metropolitan University, Japan [email protected] Danushka Bollegala University of Liverpool, UK [email protected] Abstract Word embeddings learnt from massive text collections have demonstrated significant levels of discriminative biases such as gender, racial or ethnic biases, which in turn bias the down-stream NLP applications that use those word embeddings. Taking gender-bias as a working example, we propose a debiasing method that preserves non-discriminative gender-related information, while removing stereotypical discriminative gender biases from pre-trained word embeddings. Specifically, we consider four types of information: feminine, masculine, gender-neutral and stereotypical, which represent the relationship between gender vs. bias, and propose a debiasing method that (a) preserves the genderrelated information in feminine and masculine words, (b) preserves the neutrality in genderneutral words, and (c) removes the biases from stereotypical words. Experimental results on several previously proposed benchmark datasets show that our proposed method can debias pre-trained word embeddings better than existing SoTA methods proposed for debiasing word embeddings while preserving gender-related but non-discriminative information. 1 Introduction Despite the impressive success stories behind word representation learning (Devlin et al., 2018; Peters et al., 2018; Pennington et al., 2014; Mikolov et al., 2013c,a), further investigations into the learnt representations have revealed several worrying issues. The semantic representations learnt, in particular from social media, have shown to encode significant levels of racist, offensive and discriminative language usage (Bolukbasi et al., 2016; Zhao et al., 2018b; Elazar and Goldberg, 2018; Rudinger et al., 2018; Zhao et al., 2018a). For example, Bolukbasi et al. (2016) showed that word representations learnt from a large (300GB) news corpus to amplify unfair gender biases. Microsoft’s AI chat bot Tay learnt abusive language from Twitter within the first 24 hours of its release, which forced Microsoft to shutdown the bot (The Telegraph, 2016). Caliskan et al. (2017) conducted an implicit association test (IAT) (Greenwald et al., 1998) using the cosine similarity measured from word representations, and showed that word representations computed from a large Web crawl contain human-like biases with respect to gender, profession and ethnicity. Given the broad applications of pre-trained word embeddings in various down-stream NLP tasks such as machine translation (Zou et al., 2013), sentiment analysis (Shi et al., 2018), dialogue generation (Zhang et al., 2018) etc., it is important to debias word embeddings before they are applied in NLP systems that interact with and/or make decisions that affect humans. We believe that no human should be discriminated on the basis of demographic attributes by an NLP system, and there exist clear legal (European Union, 1997), business and ethical obligations to make NLP systems unbiased (Holstein et al., 2018). 
Despite the growing need for unbiased word embeddings, debiasing pre-trained word embeddings is a challenging task that requires a fine balance between removing information related to discriminative biases, while retaining information that is necessary for the target NLP task. For example, profession-related nouns such as professor, doctor, programmer have shown to be stereotypically male-biased, whereas nurse, homemaker to be stereotypically female-biased, and a debiasing method must remove such biases. On the other hand, one would expect1, beard to be associated with male nouns and bikini to be associ1This indeed is the case for pre-trained GloVe embeddings 1642 ated with female nouns, and preserving such gender biases would be useful, for example, for a recommendation system (Garimella et al., 2017). As detailed later in section 2, existing debiasing methods can be seen as embedding word embeddings into a subspace that is approximately orthogonal to a gender subspace spanned by genderspecific word embeddings. Although unsupervised, weakly-supervised and adversarially trained models have been used for learning such embeddings, they primarily focus on the male-female gender direction and ignore the effect of words that have a gender orientation but not necessarily unfairly biased. To perform an extensive treatment of the gender debiasing problem, we split a given vocabulary V into four mutually exclusive sets of word categories: (a) words wf ∈Vf that are femalebiased but non-discriminative, (b) words wm ∈ Vm that are male-biased but non-discriminative, (c) words wn ∈Vn that are gender-neutral, and (d) words ws ∈Vs that are stereotypical (i.e., unfairly2 gender-biased). Given a large set of pretrained word embeddings and small seed example sets for each of those four categories, we learn an embedding that (i) preserves the feminine information for the words in Vf, (ii) preserves the masculine information for the words in Vm, (iii) protects the neutrality of the gender-neutral words in Vn, while (iv) removing the gender-related biases from stereotypical words in Vs. The embedding is learnt using an encoder in a denoising autoencoder, while the decoder is trained to reconstruct the original word embeddings from the debiased embeddings that do not contain unfair gender biases. The overall model is trained end-to-end to dynamically balance the competing criteria (i)(iv). We evaluate the bias and accuracy of the word embeddings debiased by the proposed method on multiple benchmark datasets. On the SemBias (Zhao et al., 2018b) gender relational analogy dataset, our proposed method outperforms previously proposed hard-debiasing (Bolukbasi et al., 2016) and gender-neural Global Vectors (GN-GloVe) (Zhao et al., 2018b) by correctly debiasing stereotypical analogies. Following prior work, we evaluate the loss of information due to debiasing on benchmark datasets for semantic 2We use the term unfair as used in fairness-aware machine learning. similarity and word analogy. Experimental results show that the proposed method can preserve the semantics of the original word embeddings, while removing gender biases. This shows that the debiased word embeddings can be used as drop-in replacements for word embeddings used in NLP applications. 
Moreover, experimental results show that our proposed method can also debias word embeddings that are already debiased using previously proposed debiasing methods such as GNGloVe to filter out any remaining gender biases, while preserving semantic information useful for downstream NLP applications. This enables us to use the proposed method in conjunction with existing debiasing methods. 2 Related Work To reduce the gender stereotypes embedded inside pre-trained word representations, Bolukbasi et al. (2016) proposed a post-processing approach that projects gender-neutral words to a subspace, which is orthogonal to the gender dimension defined by a list of gender-definitional words. They refer to words associated with gender (e.g., she, actor) as gender-definitional words, and the remainder gender-neutral. They proposed a harddebiasing method where the gender direction is computed as the vector difference between the embeddings of the corresponding genderdefinitional words, and a soft-debiasing method, which balances the objective of preserving the inner-products between the original word embeddings, while projecting the word embeddings into a subspace orthogonal to the gender definitional words. They use a seed set of gender-definitional words to train a support vector machine classifier, and use it to expand the initial set of genderdefinitional words. Both hard and soft debiasing methods ignore gender-definitional words during the subsequent debiasing process, and focus only on words that are not predicted as genderdefinitional by the classifier. Therefore, if the classifier erroneously predicts a stereotypical word as a gender-definitional word, it would not get debiased. Zhao et al. (2018b) proposed Gender-Neutral Global Vectors (GN-GloVe) by adding a constraint to the Global Vectors (GloVe) (Pennington et al., 2014) objective such that the gender-related information is confined to a sub-vector. During optimisation, the squared ℓ2 distance between gender1643 related sub-vectors are maximised, while simultaneously minimising the GloVe objective. GNGloVe learns gender-debiased word embeddings from scratch from a given corpus, and cannot be used to debias pre-trained word embeddings. Moreover, similar to hard and soft debiasing methods described above, GN-GloVe uses pre-defined lists of feminine, masculine and gender-neutral words and does not debias words in these lists. Debiasing can be seen as a problem of hiding information related to a protected attribute such as gender, for which adversarial learning methods (Xie et al., 2017; Elazar and Goldberg, 2018; Li et al., 2018) have been proposed in the fairnessaware machine learning community (Kamiran and Calders, 2009). In these approaches, inputs are first encoded, and then two classifiers are trained – a target task predictor that uses the encoded input to predict the target NLP task, and a protectedattribute predictor that uses the encoded input to predict the protected attribute. The two classifiers and the encoder is learnt jointly such that the accuracy of the target task predictor is maximised, while minimising the accuracy of the protectedattribute predictor. However, Elazar and Goldberg (2018) showed that although it is possible to obtain chance-level development-set accuracy for the protected attribute during training, a post-hoc classifier, trained on the encoded inputs can still manage to reach substantially high accuracies for the protected attributes. 
They conclude that adversarial learning alone does not guarantee invariant representations for the protected attributes.

Gender biases have been identified in several tasks in NLP such as coreference resolution (Rudinger et al., 2018; Zhao et al., 2018a) and machine translation (Prates et al., 2018). For example, rule-based, feature-based as well as neural coreference resolution methods trained on biased resources have been shown to reflect those biases in their predictions (Rudinger et al., 2018). Google Machine Translation, for example, provides male and female versions of the translations (https://bit.ly/2B0nVHZ) when the gender in the source language is ambiguous.

3 Gender-Preserving Debiasing

3.1 Formulation
Given a pre-trained set of d-dimensional word embeddings {w_i}_{i=1}^{|V|} over a vocabulary V, we consider the problem of learning a map E : R^d → R^l that projects the original pre-trained word embeddings to a debiased l-dimensional space. We do not assume any knowledge about the word embedding learning algorithm that was used to produce the pre-trained word embeddings given to us. Moreover, we do not assume the availability of or access to the language resources such as corpora or lexicons that might have been used by the word embedding learning algorithm. Decoupling the debiasing method from the word embedding learning algorithm and resources increases the applicability of the proposed method, enabling us to debias pre-trained word embeddings produced using different word embedding learning algorithms and different types of resources.

We propose a debiasing method that models the interaction between the values of the protected attribute (in the case of gender we consider male, female and neutral as possible attribute values) and whether there is a stereotypical bias or not. Given four sets of words: masculine (V_m), feminine (V_f), neutral (V_n) and stereotypical (V_s), our proposed method learns a projection that satisfies the following four criteria:
(i) for w_f ∈ V_f, we protect its feminine properties,
(ii) for w_m ∈ V_m, we protect its masculine properties,
(iii) for w_n ∈ V_n, we protect its gender neutrality, and
(iv) for w_s ∈ V_s, we remove its gender biases.
By definition the four word categories are mutually exclusive, and the total vocabulary is given by their union V = V_m ∪ V_f ∪ V_n ∪ V_s. A key feature of the proposed method that distinguishes it from prior work on debiasing word embeddings is its ability to differentiate undesirable (stereotypical) biases from the desirable (expected) gender information in words. The procedure we follow to compile the four word sets is described later in subsection 4.1, and the words that belong to each of the four categories are shown in the supplementary material.

To explain the proposed gender debiasing method, let us first consider a feminine regressor C_f : R^l → [0, 1], parameterised by θ_f, that predicts the degree of feminineness of the word w. Here, highly feminine words are assigned values close to 1. Likewise, let us consider a masculine regressor C_m : R^l → [0, 1], parameterised by θ_m, that predicts the degree of masculinity of w. We then learn the debiasing function as the encoder E : R^d → R^l of an autoencoder (parameterised by θ_e), where the corresponding decoder (parameterised by θ_d) is given by D : R^l → R^d. For feminine and masculine words, we require the encoded space to retain the gender-related information.
The squared losses, L_f and L_m, given respectively by (1) and (2), express the extent to which this constraint is satisfied.

L_f = \sum_{w \in V_f} ||C_f(E(w)) - 1||_2^2 + \sum_{w \in V \setminus V_f} ||C_f(E(w))||_2^2    (1)

L_m = \sum_{w \in V_m} ||C_m(E(w)) - 1||_2^2 + \sum_{w \in V \setminus V_m} ||C_m(E(w))||_2^2    (2)

Here, for notational simplicity, we drop the dependence on parameters. For the stereotypical and gender-neutral words, we require that they are embedded into a subspace that is orthogonal to a gender directional vector, v_g, computed using a set, Ω, of feminine and masculine word-pairs (w_f, w_m) ∈ Ω as given by (3).

v_g = \frac{1}{|Ω|} \sum_{(w_f, w_m) \in Ω} (E(w_m) - E(w_f))    (3)

Prior work on gender debiasing (Bolukbasi et al., 2016; Zhao et al., 2018b) showed that the vector difference between the embeddings of male-female word-pairs such as he and she accurately represents the gender direction. When training, we keep v_g fixed during an epoch, and re-estimate v_g between every epoch. We consider the squared inner-product between v_g and the debiased stereotypical or gender-neutral words as the loss, L_g, as given by (4).

L_g = \sum_{w \in V_n \cup V_s} (v_g^\top E(w))^2    (4)

It is important that we preserve the semantic information encoded in the word embeddings as much as possible when we perform debiasing. If too much information is removed from the word embeddings, not limited to gender biases, then the debiased word embeddings might not be sufficiently accurate to be used in downstream NLP applications. For this purpose, we minimise the reconstruction loss, L_r, of the autoencoder given by (5).

L_r = \sum_{w \in V} ||D(E(w)) - w||_2^2    (5)

Finally, we define the total objective as the linearly-weighted sum of the above-defined losses as given by (6).

L = λ_f L_f + λ_m L_m + λ_g L_g + λ_r L_r    (6)

Here, the coefficients λ_f, λ_m, λ_g, λ_r are non-negative hyper-parameters that add to 1. They determine the relative importance of the different constraints we consider and can be learnt using training data or determined via cross-validation over a dedicated validation dataset. In our experiments, we use the latter approach.

3.2 Implementation and Training
C_f and C_m are both implemented as feed-forward neural networks with one hidden layer, and the sigmoid function is used as the non-linear activation. Increasing the number of hidden layers beyond one for C_f and C_m did not result in a significant increase in accuracy. Both the encoder E and the decoder D of the autoencoder are implemented as feed-forward neural networks with two hidden layers. The hyperbolic tangent is used as the activation function throughout the autoencoder. The objective (6) is minimised w.r.t. the parameters θ_f, θ_m, θ_e and θ_d for a given pre-trained set of word embeddings. During optimisation, we used dropout with probability 0.01 and stochastic gradient descent with the initial learning rate set to 0.1. The hyper-parameters λ_f, λ_m, λ_g, λ_r are estimated using a separate validation dataset as described later in subsection 4.1.

Note that it is possible to pre-train C_f and C_m separately using V_f and V_m prior to training the full objective (6). In our preliminary experiments, we found that initialising θ_f and θ_m to the pre-trained versions of C_f and C_m is helpful for the optimisation process, resulting in early convergence to better solutions compared to starting from random initialisations of θ_f and θ_m. For pre-training C_f and C_m, we used the Adam optimiser (Kingma and Ba, 2015) with the initial learning rate set to 0.0002 and a mini-batch size of 512. A minimal sketch of the combined objective (6) is given below.
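To make the interaction between the four losses concrete, here is a minimal PyTorch sketch of the combined objective (6). The module layout, the names GPDebias and total_loss, and the dictionary-based batching are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class GPDebias(nn.Module):
    """Encoder/decoder plus feminine and masculine regressors (illustrative sizes)."""
    def __init__(self, d, l):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d, l), nn.Tanh(), nn.Linear(l, l), nn.Tanh())
        self.dec = nn.Sequential(nn.Linear(l, l), nn.Tanh(), nn.Linear(l, d))
        self.c_f = nn.Sequential(nn.Linear(l, l), nn.Sigmoid(), nn.Linear(l, 1), nn.Sigmoid())
        self.c_m = nn.Sequential(nn.Linear(l, l), nn.Sigmoid(), nn.Linear(l, 1), nn.Sigmoid())

def total_loss(model, W, V_f, V_m, V_n, V_s, pairs, lam):
    """W: dict word -> 1-D tensor of size d; V_f/V_m/V_n/V_s: the four word sets;
    pairs: (feminine, masculine) word pairs used to estimate the gender direction v_g;
    lam: dict with the four mixing weights."""
    E = {w: model.enc(v) for w, v in W.items()}          # debiased embeddings
    L_f = sum((model.c_f(E[w]) - 1).pow(2).sum() for w in V_f) + \
          sum(model.c_f(E[w]).pow(2).sum() for w in W if w not in V_f)
    L_m = sum((model.c_m(E[w]) - 1).pow(2).sum() for w in V_m) + \
          sum(model.c_m(E[w]).pow(2).sum() for w in W if w not in V_m)
    # gender direction, treated as a constant within an epoch (hence .detach())
    v_g = torch.stack([E[m] - E[f] for f, m in pairs]).mean(dim=0).detach()
    L_g = sum(torch.dot(v_g, E[w]).pow(2) for w in set(V_n) | set(V_s))
    L_r = sum((model.dec(E[w]) - W[w]).pow(2).sum() for w in W)
    return lam["f"] * L_f + lam["m"] * L_m + lam["g"] * L_g + lam["r"] * L_r
```

Detaching v_g reflects the choice described above of treating the gender direction as a constant within an epoch and re-estimating it between epochs.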
Autoencoder is also pre-trained using a randomly 1645 selected 5000 word embeddings and dropout regularisation is applied with probability 0.05. We note that Vf and Vm are separate word sets, not necessarily having corresponding femininemasculine pairs as in Ωused in (4). It is of course possible to re-use the words in Ωin Vf and Vm, and we follow this approach in our experiments, which helps to decrease the number of seed words required to train the proposed method. Moreover, the number of training examples across the four categories Vf, Vm, Vn, Vs were significantly different, which resulted in an imbalanced learning setting. We conduct one-sided undersampling (Kubat and Matwin, 1997) to successfully overcome this data imbalance issue. The code and the debiased embeddings are publicly available4. 4 Experiments 4.1 Training and Development Data We use the feminine and masculine word lists (223 words each) created by Zhao et al. (2018b) as Vf and Vm, respectively. To create a gender-neutral word list, Vn, we select gender-neutral words from a list of 3000 most frequent words in English5. Two annotators independently selected words and subsequently verified for gender neutrality. The final set of V contains 1031 gender-neutral words. We use the stereotypical word list compiled by Bolukbasi et al. (2016) as Vs, which contains 166 professions that are stereotypically associated with one type of a gender. The four sets of words used in the experiments are shown in the supplementary material. We train GloVe (Pennington et al., 2014) on 2017 January dump of English Wikipedia to obtain pre-trained 300-dimensional word embeddings for 322636 unique words. In our experiments, we set both d and l to 300 to create 300dimensional de-biased word embeddings. We randomly selected 20 words from each of the 4 sets Vf, Vm, Vn and Vs, and used them as a development set for pre-training Cf and Cm and to estimate the hyperparameters in (6). The optimal hyperparameter values estimated on this development dataset are: λf = λm = λg = 0.0001, and λr = 1.0. In our preliminary experiments we observed that increasing λf, λm and λg relative to λr results in higher reconstruction losses in the 4https://github.com/kanekomasahiro/gp_ debias 5https://bit.ly/2SvBINY autoencoder. This shows that the ability to accurately reconstruct the original word embeddings is an important requirement during debiasing. 4.2 Baselines and Comparisons We compare our proposed method against several baselines. GloVe: is the pre-trained GloVe embeddings described in subsection 4.1. This baseline denotes a non-debiased version of the word embeddings. Hard-GloVe: We use the implementation6 of hard-debiasing (Bolukbasi et al., 2016) method by the original authors and produce a debiased version of the pre-trained GloVe embeddings.7 GN-GloVe : We use debiased GN-GloVe embeddings released by the original authors8, without retraining ourselves as a baseline. AE (GloVe): We train an autoencoder by minimising the reconstruction loss defined in (5) and encode the pre-trained GloVe embeddings to a vector space with the same dimensionality. This baseline can be seen as surrogated version of the proposed method with λf = λm = λg = 0. AE (GloVe) does not perform debiasing and shows the amount of semantic information that can be preserved by autoencoding the input embeddings. AE (GN-GloVe): Similar to AE (GloVe), this method autoencodes the debiased word embeddings produced by GN-GloVe. 
GP (GloVe): We apply the proposed gender-preserving (GP) debiasing method to the pre-trained GloVe embeddings to debias them.
GP (GN-GloVe): To test whether we can use the proposed method to further debias word embeddings that are already debiased using other methods, we apply it on GN-GloVe.

6 https://github.com/tolga-b/debiaswe
7 Bolukbasi et al. (2016) released debiased embeddings only for word2vec; for comparison purposes with GN-GloVe, we use GloVe as the pre-trained word embedding and apply hard-debiasing on GloVe.
8 https://github.com/uclanlp/gn_glove

4.3 Evaluating Debiasing Performance
We use the SemBias dataset created by Zhao et al. (2018b) to evaluate the level of gender bias in word embeddings. Each instance in SemBias consists of four word pairs: a gender-definition word pair (Definition; e.g., "waiter - waitress"),
By applying the proposed method on GloVe (corresponds to GP (GloVe)) we can decrease the gender biases in GloVe, while preserving useful gender-related information for detecting definitional word-pairs. Comparing corresponding AE and GP versions of GloVe and GN-GloVe, we see that autoencoding alone is insufficient to consistently preserve gender-related information. 4.4 Preservation of Word Semantics It is important that the debiasing process removes only gender biases and preserve other information unrelated to gender biases in the original word embeddings. If too much information is removed from word embeddings during the debiasing process, then the debiased embeddings might not carry adequate information for downstream NLP tasks that use those debiased word embeddings. To evaluate the semantic accuracy of the debiased word embeddings, following prior work on debiasing (Bolukbasi et al., 2016; Zhao et al., 2018a), we use them in two popular tasks: semantic similarity measurement and analogy detection. We recall that we do not propose novel word embedding learning methods in this paper, and what is important here is whether the debiasing process preserves as much information as possible in the 1647 Embeddings sem syn total MSR SE GloVe 80.1 62.1 70.3 53.8 38.8 Hard-GloVe 80.3 62.7 70.7 54.4 39.1 GN-GloVe 77.8 60.9 68.6 51.5 39.1 AE (GloVe) 81.0 61.9 70.5 52.6 38.9 AE (GN-GloVe) 78.6 61.3 69.2 51.2 39.1 GP (GloVe) 80.5 61.0 69.9 51.3 38.5 GP (GN-GloVe) 78.3 61.3 69.0 51.0 39.6 Table 2: Accuracy for solving word analogies. Datasets #Orig #Bal WS 353 366 RG 65 77 MTurk 771 784 RW 2,034 2,042 MEN 3,000 3,122 SimLex 999 1,043 Table 3: Number of word-pairs in the original (Orig) and balanced (Bal) similarity benchmarks. original word embeddings. 4.4.1 Analogy Detection Given three words a, b, c in analogy detection, we must predict a word d that completes the analogy “a is b as c is to d”. We use the CosAdd (Levy and Goldberg, 2014) that finds d that has the maximum cosine similarity with (b−a+c). We use the semantic (sem) and syntactic (syn) analogies in the Google analogy dataset (Mikolov et al., 2013b) (in total contains 19,556 questions), MSR dataset (7,999 syntactic questions) (Mikolov et al., 2013d) and SemEval dataset (SE, 79 paradigms) (Jurgens et al., 2012) as benchmark datasets. The percentage of correctly solved analogy questions is reported in Table 2. We see that there is no significant degradation of performance due to debiasing using the proposed method. 4.4.2 Semantic Similarity Measurement The correlation between the human ratings and similarity scores computed using word embeddings for pairs of words has been used as a measure of the quality of the word embeddings (Mikolov et al., 2013d). We compute cosine similarity between word embeddings and measure Spearman correlation against human ratings for the word-pairs in the following benchmark datasets: Word Similarity 353 dataset (WS) (Finkelstein et al., 2001), RubensteinGoodenough dataset (RG) (Rubenstein and Goodenough, 1965), MTurk (Halawi et al., 2012), rare words dataset (RW) (Luong et al., 2013), MEN dataset (Bruni et al., 2012) and SimLex dataset (Hill et al., 2015). Unfortunately, existing benchmark datasets for semantic similarity were not created considering gender-biases and contain many stereotypical examples. For example, in MEN, the word sexy has high human similarity ratings with lady and girl compared to man and guy. 
Furthermore, masculine words and soldier are included in multiple datasets with high human similarity ratings, whereas it is not compared with feminine words in any of the datasets. Although prior work studying gender bias have used these datasets for evaluation purposes (Bolukbasi et al., 2016; Zhao et al., 2018a), we note that high correlation with human ratings can be achieved with biased word embeddings. To address this issue, we balance the original datasets with respect to gender by including extra word pairs generated from the opposite sex with the same human ratings. For instance, if the wordpair (baby, mother) exists in the dataset, we add a new pair (baby, father) to the dataset. Ideally, we should re-annotate this balanced version of the dataset to obtain human similarity ratings. However, such a re-annotation exercise would be costly and inconsistent with the original ratings. Therefore, we resort to a proxy where we reassign the human rating for the original word-pair to its derived opposite gender version. Table 3 shows the number of word-pairs in the original (Orig) and balanced (Bal) similarity benchmarks. As shown in Table 4, GP (GloVe) and GP (GN1648 Embeddings WS RG MTurk RW MEN SimLex Orig Bal Orig Bal Orig Bal Orig Bal Orig Bal Orig Bal GloVe 61.6 62.9 75.3 75.5 64.9 63.9 37.3 37.5 73.0 72.6 34.7 35.9 Hard-GloVe 61.7 63.1 76.4 76.7 65.1 64.1 37.4 37.4 72.8 72.5 35.0 36.1 GN-GloVe 62.5 63.7 74.1 73.7 66.2 65.5 40.0 40.1 74.9 74.5 37.0 38.1 AE (GloVe) 61.3 62.6 77.1 76.8 64.9 64.1 35.7 35.8 71.9 71.5 34.7 35.9 AE (GN-GloVe) 61.3 62.6 73.0 74.0 66.3 65.5 38.7 38.9 73.8 73.4 36.7 37.7 GP (GloVe) 59.7 61.0 75.4 75.5 63.9 63.1 34.7 34.8 70.8 70.4 33.9 35.0 GP (GN-GloVe) 63.2 64.3 72.2 72.2 67.9 67.4 43.2 43.3 75.9 75.5 38.4 39.5 Table 4: Spearman correlation between human ratings and cosine similarity scores computed using word embeddings for the word-pairs in the original and balanced versions of the benchmark datasets. (a) GloVe (b) GN (GloVe) (c) Hard-Glove (d) GP (GloVe) Figure 1: Cosine similarity between gender, gender-neutral, stereotypical words and the gender direction. GloVe) obtain the best performance on the balanced versions of all benchmark datasets. Moreover, the performance of GP (GloVe) on both original and balanced datasets is comparable to that of GloVe, which indicates that the information encoded in GloVe embeddings are preserved in the debiased embeddings, while removing stereotypical gender biases. The autoencoded versions report similar performance to the original input embeddings. Overall, the results on the analogy detection and semantic similarity measurement tasks show that our proposed method removes only gender-biases and preserve other useful gender-related information. 4.5 Visualising the Effect of Debiasing To visualise the effect of debiasing on different word categories, we compute the cosine similarity between the gender directional vector # » he −# » she, 1649 and selected gender-oriented (female or male), gender-neutral and stereotypical words. In Figure 1, horizontal axises show the cosine similarity with the gender directional vector (positive scores for masculine words) and the words are alphabetically sorted within each category. From Figure 1, we see that the original GloVe embeddings show a similar spread of cosine similarity scores for gender-oriented as well as stereotypical words. 
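The gender-balancing procedure described above, together with the Spearman-correlation evaluation it feeds, can be sketched as follows. The gendered-word mapping shown here is an illustrative assumption (the actual list used to balance the benchmarks may differ), and SciPy's `spearmanr` computes the rank correlation.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative mapping between gendered words (an assumption for this sketch).
GENDER_SWAP = {"mother": "father", "father": "mother",
               "woman": "man", "man": "woman",
               "girl": "boy", "boy": "girl"}

def balance_dataset(pairs):
    """Add an opposite-gender copy of each gendered pair with the same rating.

    `pairs` is a list of (word1, word2, human_rating) triples.
    """
    balanced = list(pairs)
    seen = {(w1, w2) for w1, w2, _ in pairs}
    for w1, w2, rating in pairs:
        s1, s2 = GENDER_SWAP.get(w1, w1), GENDER_SWAP.get(w2, w2)
        if (s1, s2) != (w1, w2) and (s1, s2) not in seen:
            balanced.append((s1, s2, rating))  # reuse the original human rating
            seen.add((s1, s2))
    return balanced

def evaluate_similarity(emb, pairs):
    # Spearman correlation between human ratings and cosine similarities.
    human, predicted = [], []
    for w1, w2, rating in pairs:
        if w1 in emb and w2 in emb:
            human.append(rating)
            cos = np.dot(emb[w1], emb[w2]) / (np.linalg.norm(emb[w1]) * np.linalg.norm(emb[w2]))
            predicted.append(cos)
    return spearmanr(human, predicted).correlation
```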
When debiased by hard-debias (Hard-GloVe) and GN-GloVe, we see that stereotypical and gender-neutral words get their gender similarity scores equally reduced. Interestingly, Hard-GloVe shifts even gender-oriented words towards the masculine direction. On the other hand, GP (GloVe) decreases gender bias in the stereotypical words, while almost preserving gender-neutral and gender-oriented words as in GloVe. Considering that a significant number of words in English are gender-neutral, it is essential that debiasing methods do not adversely change their orientation. In particular, the proposed method’s ability to debias stereotypical words that carry unfair gender-biases, while preserving the genderorientation in feminine, masculine and neutral words is important when applying the debiased word embeddings in NLP applications that depend on word embeddings for representing the input texts 5 Conclusion We proposed a method to remove gender-specific biases from pre-trained word embeddings. Experimental results on multiple benchmark datasets demonstrate that the proposed method can accurately debias pre-trained word embeddings, outperforming previously proposed debiasing methods, while preserving useful semantic information. In future, we plan to extend the proposed method to debias other types of demographic biases such as ethnic, age or religious biases. References Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In NIPS. Elia Bruni, Gemma Boleda, Marco Baroni, and Nam Khanh Tran. 2012. Distributional semantics in technicolor. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 136–145. Association for Computational Linguistics. Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356:183–186. C. J. Clopper and E. S. Pearson. 1934. The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika, 26(4):404 – 413. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Yanai Elazar and Yoav Goldberg. 2018. Adversarial Removal of Demographic Attributes from Text Data. In Proc. of EMNLP. European Union. 1997. Treaty of amsterdam (article 13). Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th International Conference on World Wide Web, WWW ’01, pages 406–414, New York, NY, USA. ACM. Aparna Garimella, Carmen Banea, and Rada Mihalcea. 2017. Demographic-aware word associations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2275–2285, Copenhagen, Denmark. Association for Computational Linguistics. Anthony G. Greenwald, Debbie E. McGhee, and Jordan L. K. Schwatz. 1998. Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74(6):1464–1480. Guy Halawi, Gideon Dror, Evgeniy Gabrilovich, and Yehuda Koren. 2012. Large-scale learning of word relatedness with constraints. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’12, pages 1406–1414, New York, NY, USA. ACM. 
Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665–695. Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daum´e III, Miro Dud´ık, and Hanna Wallach. 2018. Improving fairness in machine learning systems: What do industry practitioners need? David Jurgens, Saif Mohammad, Peter Turney, and Keith Holyoak. 2012. Semeval-2012 task 2: Measuring degrees of relational similarity. In *SEM 2012: The First Joint Conference on Lexical and 1650 Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 356–364. Association for Computational Linguistics. Faisal Kamiran and Toon Calders. 2009. Classifying without discriminating. In Proc. of International Conference on Computer, Control and Communication (IC4), pages 1–6. Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In Proc. of ICLR. Miroslav Kubat and Stan Matwin. 1997. Addressing the curse of imbalanced training sets: one-sided selection. In ICML 1997, pages 179 – 186. Omer Levy and Yoav Goldberg. 2014. Linguistic regularities in sparse and explicit word representations. In CoNLL. Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text representations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 25–30. Association for Computational Linguistics. Thang Luong, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 104–113. Association for Computational Linguistics. Tomas Mikolov, Kai Chen, and Jeffrey Dean. 2013a. Efficient estimation of word representation in vector space. In ICLR. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continous space word representations. In NAACL-HLT, pages 746 – 751. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013d. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746–751, Atlanta, Georgia. Association for Computational Linguistics. Jeffery Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: global vectors for word representation. In EMNLP, pages 1532–1543. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL-HLT. Marcelo O. R. Prates, Pedro H. C. Avelar, and Luis Lamb. 2018. Assessing Gender Bias in Machine Translation – A Case Study with Google Translate. Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Commun. ACM, 8(10):627–633. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8–14. Association for Computational Linguistics. Bei Shi, Zihao Fu, Lidong Bing, and Wai Lam. 2018. Learning domain-sensitive and sentimentaware word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2494–2504. Association for Computational Linguistics. The Telegraph. 2016. Microsoft deletes ‘teen girl’ ai after it became a hitlter-loving sex robot within 24 hours. https://goo.gl/mE8p3J. Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, and Graham Neubig. 2017. Controllable invariance through adversarial feature learning. In Proc. of NIPS. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204– 2213. Association for Computational Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20. Association for Computational Linguistics. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and KaiWei Chang. 2018b. Learning Gender-Neutral Word Embeddings. In Proc. of EMNLP, pages 4847– 4853. Will Y. Zou, Richard Socher, Daniel Cer, and Christopher D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proc. of EMNLP’13, pages 1393 – 1398.
Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology Ran Zmigrod1 Sabrina J. Mielke2 Hanna Wallach3 Ryan Cotterell1 1 University of Cambridge 2 Johns Hopkins University 3 Microsoft Research [email protected] [email protected] [email protected] [email protected] Abstract Gender stereotypes are manifest in most of the world’s languages and are consequently propagated or amplified by NLP systems. Although research has focused on mitigating gender stereotypes in English, the approaches that are commonly employed produce ungrammatical sentences in morphologically rich languages. We present a novel approach for converting between masculine-inflected and feminineinflected sentences in such languages. For Spanish and Hebrew, our approach achieves F1 scores of 82% and 73% at the level of tags and accuracies of 90% and 87% at the level of forms. By evaluating our approach using four different languages, we show that, on average, it reduces gender stereotyping by a factor of 2.5 without any sacrifice to grammaticality. 1 Introduction One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases. This is because NLP systems depend on language corpora, which are inherently “not objective; they are creations of human design” (Crawford, 2013). One type of societal bias that has received considerable attention from the NLP community is gender stereotyping (Garg et al., 2017; Rudinger et al., 2017; Sutton et al., 2018). Gender stereotypes can manifest in language in overt ways. For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering. Consequently, any NLP system that is trained such a corpus will likely learn to associate engineer with men, but not with women (De-Arteaga et al., 2019). To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English (Bolukbasi et al., 2016; Dixon et al., 2018; Zhao et al., 2017). Yet, gender stereotypes also exist in other languages Los ingenieros son expertos Analysis El ingeniero ser experto DET NOUN VERB ADJ [MSC; PL] [MSC; PL] [IN; PR; PL] [MSC; PL] Intervention El ingeniera ser experto DET NOUN VERB ADJ [MSC; PL] [FEM; PL] [IN; PR; PL] [MSC; PL] Inference El ingeniera ser experto DET NOUN VERB ADJ [FEM; PL] [FEM; PL] [IN; PR; PL] [FEM; PL] Reinflection Las ingenieras son expertas Figure 1: Transformation of Los ingenieros son expertos (i.e., The male engineers are skilled) to Las ingenieras son expertas (i.e., The female engineers are skilled). We extract the properties of each word in the sentence. We then fix a noun and its tags and infer the manner in which the remaining tags must be updated. Finally, we reinflect the lemmata to their new forms. because they are a function of society, not of grammar. Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement (Corbett, 1991). In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns. This means that if the gender of one word changes, the others have to be updated to match. 
As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped (Zhao et al., 2018), will yield ungrammatical sentences. Consider the Spanish phrase el ingeniero experto (the skilled engineer). Replacing ingeniero with ingeniera is insufficient—el must 1 also be replaced with la and experto with experta. In this paper, we present a new approach to counterfactual data augmentation (CDA; Lu et al., 2018) for mitigating gender stereotypes associated with animate1 nouns (i.e., nouns that represent people) for morphologically rich languages. We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns. We use this model as part of a four-step process, depicted in Fig. 1, to reinflect entire sentences following an intervention on the grammatical gender of one word. We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level F1 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively. We also conduct an extrinsic evaluation using four languages. Following Lu et al. (2018), we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality. 2 Gender Stereotypes in Text Men and women are mentioned at different rates in text (Coates, 1987). This problem is exacerbated in certain contexts. For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering. This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resum´e filtering system. Gender stereotypes of this sort have been observed in word embeddings (Bolukbasi et al., 2016; Sutton et al., 2018), contextual word embeddings (Zhao et al., 2019), and co-reference resolution systems (Rudinger et al., 2018; Zhao et al., 2018) inter alia. A quick fix: swapping gendered words. One approach to mitigating such gender stereotypes is counterfactual data augmentation (CDA; Lu et al., 2018). In English, this involves augmenting a corpus with additional sentences in which gendered words, such as he and she, have been swapped to yield a balanced representation. Indeed, Zhao et al. (2018) showed that this simple heuristic significantly reduces gender stereotyping in neural co-reference resolution systems, without harming system performance. Unfortunately, this approach 1Specifically, we consider a noun to be animate if WordNet considers person to be a hypernym of that noun. is only applicable to English and other languages with little morphological inflection. When applied to morphologically rich languages that exhibit gender agreement, it yields ungrammatical sentences. The problem: inflected languages. Many languages, including Spanish and Hebrew, have gender inflections for nouns, verbs, and adjectives— i.e., the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.2 This means that if the gender of one word changes, the others have to be updated to preserve morpho-syntactic agreement (Corbett, 2012). Consider the following example from Spanish, where we wish to transform Sentence (1) to Sentence (2). (Parts of words that mark gender are depicted in bold.) 
This task is not as simple as replacing el with la—ingeniero and experto must also be reinflected. Moreover, the changes required for one language are not the same as those required for another (e.g., verbs are marked with gender in Hebrew, but not in Spanish). (1) El The.MSC.SG ingeniero engineer.MSC.SG alem´an German.MSC.SG es is.IN.PR.SG muy very experto. skilled.MSC.SG (The German engineer is very skilled.) (2) La The.FEM.SG ingeniera engineer.FEM.SG alemana German.FEM.SG es is.IN.PR.SG muy very experta. skilled.FEM.SG (The German engineer is very skilled.) Our approach. Our goal is to transform sentences like Sentence (1) to Sentence (2) and vice versa. To the best of our knowledge, this task has not been studied previously. Indeed, there is no existing annotated corpus of paired sentences that could be used to train a supervised model. As a result, we take an unsupervised3 approach using dependency trees, lemmata, part-of-speech (POS) tags, and morpho-syntactic tags from Universal Dependencies corpora (UD; Nivre et al., 2018). Specifically, we propose the following four-step process: 1. Analyze the sentence (including parsing, morphological analysis, and lemmatization). 2The number of grammatical genders varies for different languages, with two being the most common non-zero number (Dryer and Haspelmath, 2013). The languages that we use in our evaluation have two grammatical genders (male, female). 3Because we do not have any direct supervision for the task of interest, we refer to our approach as being unsupervised even though it does rely on annotated linguistic resources. 2 [MSC; SG] [MSC; SG] [MSC; SG] [SG] [−] [MSC; SG] DET NOUN ADJ VERB ADV ADJ El ingeniero alem´an es muy experto det root amod cop amod advmod Figure 2: Dependency tree for the sentence El ingeniero alem´an es muy experto. 2. Intervene on a gendered word. 3. Infer the new morpho-syntactic tags. 4. Reinflect the lemmata to their new forms. This process is depicted in Fig. 1. The primary technical contribution is a novel Markov random field for performing step 3, described in the next section. 3 A Markov Random Field for Morpho-Syntactic Agreement In this section, we present a Markov random field (MRF; Koller and Friedman, 2009) for morphosyntactic agreement. This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags. Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement. A dependency tree for a sentence (see Fig. 2 for an example) is a set of ordered triples (i, j, ℓ), where i and j are positions in the sentence (or a distinguished root symbol) and ℓ∈L is the label of the edge i →j in the tree; each position occurs exactly once as the first element in a triple. Each dependency tree T is associated with a sequence of morpho-syntactic tags m = m1, . . . , m|T| and a sequence of part-ofspeech (POS) tags p = p1, . . . , p|T|. For example, the tags m ∈M and p ∈P for ingeniero are [MSC; SG] and NOUN, respectively, because ingeniero is a masculine, singular noun. For notational simplicity, we define M = M|T| to be the set of all length-|T| sequences of morpho-syntactic tags. 
We define the probability of m given T and p as

Pr(m | T, p) ∝ ∏_{(i,j,ℓ)∈T} φ_i(m_i) · ψ(m_i, m_j | p_i, p_j, ℓ),   (1)

where the binary factor ψ(·, · | ·, ·, ·) ≥ 0 scores how well the morpho-syntactic tags m_i and m_j agree given the POS tags p_i and p_j and the label ℓ. For example, consider the amod (adjectival modifier) edge from experto to ingeniero in Fig. 2. The factor ψ(m_i, m_j | A, N, amod) returns a high score if the corresponding morpho-syntactic tags agree in gender and number (e.g., m_i = [MSC; SG] and m_j = [MSC; SG]) and a low score if they do not (e.g., m_i = [MSC; SG] and m_j = [FEM; PL]). The unary factor φ_i(·) ≥ 0 scores a morpho-syntactic tag m_i outside the context of the dependency tree. As we explain in §3.1, we use these unary factors to force or disallow particular tags when performing an intervention; we do not learn them. Eq. (1) is normalized by the following partition function:

Z(T, p) = ∑_{m′∈M} ∏_{(i,j,ℓ)∈T} φ_i(m′_i) · ψ(m′_i, m′_j | p_i, p_j, ℓ).

Z(T, p) can be calculated using belief propagation; we provide the update equations that we use in App. A. Our model is depicted in Fig. 3. It is noteworthy that this model is delexicalized—i.e., it considers only the labeled dependency tree and the POS tags, not the actual words themselves.

Figure 3: Factor graph for the sentence El ingeniero alemán es muy experto.

3.1 Parameterization

We consider a linear parameterization and a neural parameterization of the binary factor ψ(·, · | ·, ·, ·).

Linear parameterization. We define a matrix W(p_i, p_j, ℓ) ∈ R^{c×c} for each triple (p_i, p_j, ℓ), where c is the number of morpho-syntactic subtags. For example, [MSC; SG] has two subtags, MSC and SG. We then define ψ(·, · | ·, ·, ·) as follows:

ψ(m_i, m_j | p_i, p_j, ℓ) = exp(m_i^⊤ W(p_i, p_j, ℓ) m_j),

where m_i ∈ {0, 1}^c is a multi-hot encoding of m_i.

Neural parameterization. As an alternative, we also define a neural parameterization of W(p_i, p_j, ℓ) to allow parameter sharing among edges with different parts of speech and labels:

W(p_i, p_j, ℓ) = exp(U tanh(V [e(p_i); e(p_j); e(ℓ)])),

where U ∈ R^{c×c×n_1}, V ∈ R^{n_1×3n_2}, n_1 and n_2 define the structure of the neural parameterization, and each e(·) ∈ R^{n_2} is an embedding function.

Parameterization of φ_i. We use the unary factors only to force or disallow particular tags when performing an intervention. Specifically, we define

φ_i(m) = α if m = m_i, and 1 otherwise,   (2)

where α > 1 is a strength parameter that determines the extent to which m_i should remain unchanged following an intervention. In the limit as α → ∞, all tags will remain unchanged except for the tag directly involved in the intervention.4

3.2 Inference

Because our MRF is acyclic and tree-shaped, we can use belief propagation (Pearl, 1988) to perform exact inference. The algorithm is a generalization of the forward–backward algorithm for hidden Markov models (Rabiner and Juang, 1986). Specifically, we pass messages from the leaves to the root and vice versa. The marginal distribution of a node is the point-wise product of all its incoming messages; the partition function Z(T, p) is the sum of any node's marginal distribution. Computing Z(T, p) takes polynomial time (Pearl, 1988)—specifically, O(n · |M|²), where |M| is the number of morpho-syntactic tags.
Finally, inferring the highest-probability morpho-syntactic tag sequence m⋆given T and p can be performed using the max-product modification to belief propagation. 4In practice, α is set using development data. Language Accuracy Language Accuracy French 93.17 Italian 98.29 Hebrew 95.16 Spanish 97.78 Table 1: Morphological reinflection accuracies. 3.3 Parameter Estimation We use gradient-based optimization. We treat the negative log-likelihood −log (Pr(m | T, p)) as the loss function for tree T and compute its gradient using automatic differentiation (Rall, 1981). We learn the parameters of §3.1 by optimizing the negative log-likelihood using gradient descent. 4 Intervention As explained in §2, our goal is to transform sentences like Sentence (1) to Sentence (2) by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morphosyntactic agreement. For example, if we change the morpho-syntactic tag for ingeniero from [MSC;SG] to [FEM;SG], then we must also update the tags for el and experto, but do not need to update the tag for es, which should remain unchanged as [IN; PR; SG]. If we intervene on the ith word in a sentence, changing its tag from mi to m′ i, then using our model to infer the manner in which the remaining tags must be updated means using Pr(m−i | m′ i, T, p) to identify high-probability tags for the remaining words. Crucially, we wish to change as little as possible when intervening on a gendered word. The unary factors φi enable us to do exactly this. As described in the previous section, the strength parameter α determines the extent to which mi should remain unchanged following an intervention—the larger the value, the less likely it is that mi will be changed. Once the new tags have been inferred, the final step is to reinflect the lemmata to their new forms. 4 Language Training Size Annotated Test Size Hebrew 5,241 111 Spanish 14,187 136 French 14,554 – Italian 12,837 – Table 2: Language data. This task has received considerable attention from the NLP community (Cotterell et al., 2016, 2017). We use the inflection model of Wu et al. (2018). This model conditions on the lemma x and morphosyntactic tag m to form a distribution over possible inflections. For example, given experto and [A; FEM; PL], the trained inflection model will assign a high probability to expertas. We provide accuracies for the trained inflection model in Tab. 1. 5 Experiments We used the Adam optimizer (Kingma and Ba, 2014) to train both parameterizations of our model until the change in dev-loss was less than 10−5 bits. We set β = (0.9, 0.999) without tuning, and chose a learning rate of 0.005 and weight decay factor of 0.0001 after tuning. We tuned log α in the set {0.5, 0.75, 1, 2, 5, 10} and chose log α = 1. For the neural parameterization, we set n1 = 9 and n2 = 3 without any tuning. Finally, we trained the inflection model using only gendered words. We evaluate our approach both intrinsically and extrinsically. For the intrinsic evaluation, we focus on whether our approach yields the correct morphosyntactic tags and the correct reinflections. For the extrinsic evaluation, we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models. 5.1 Intrinsic Evaluation To the best of our knowledge, this task has not been studied previously. 
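The linear factor and the intervention-time unary factor described above can be sketched as follows. This is a minimal illustration under assumed tag encodings and data layouts (multi-hot tag vectors, a dict of per-(POS, POS, label) weight matrices), not the authors' implementation.

```python
import numpy as np

def psi_linear(m_i, m_j, W):
    """Linear binary factor: psi(m_i, m_j | p_i, p_j, l) = exp(m_i^T W m_j).

    m_i, m_j are multi-hot {0,1}^c encodings of morpho-syntactic tags;
    W is the c x c matrix selected for the triple (p_i, p_j, l).
    """
    return np.exp(m_i @ W @ m_j)

def phi_unary(m, observed_m, alpha):
    """Unary factor used at intervention time (Eq. 2).

    Returns alpha if the candidate tag equals the (possibly intervened-on)
    observed tag, and 1 otherwise; larger alpha keeps tags closer to the input.
    """
    return alpha if np.array_equal(m, observed_m) else 1.0

def edge_score(m_i, m_j, p_i, p_j, label, weights):
    # `weights` maps (p_i, p_j, label) triples to c x c matrices (assumed layout).
    W = weights[(p_i, p_j, label)]
    return psi_linear(m_i, m_j, W)

def unnormalized_score(tree_edges, tags, pos, weights, unary_obs, alpha):
    """Unnormalized score of a full tag assignment under Eq. (1)."""
    score = 1.0
    for i, j, label in tree_edges:
        score *= edge_score(tags[i], tags[j], pos[i], pos[j], label, weights)
    for i, m in tags.items():
        score *= phi_unary(m, unary_obs[i], alpha)
    return score
```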
As a result, there is no existing annotated corpus of paired sentences that can be used as “ground truth.” We therefore annotated Spanish and Hebrew sentences ourselves, with annotations made by native speakers of each language. Specifically, for each language, we extracted sentences containing animate nouns from that language’s UD treebank. The average length of these extracted sentences was 37 words. We then manually inspected each sentence, intervening on the gender of the animate noun and reinflecting the sentence accordingly. We chose Spanish and Hebrew because gender agreement operates differTag Form P R F 1 Acc Acc Hebrew–BASE 89.04 40.12 55.32 86.88 83.63 Hebrew–LIN 87.07 62.35 72.66 90.5 86.75 Hebrew–NN 87.18 62.96 73.12 90.62 86.25 Spanish–BASE 96.97 51.45 67.23 90.21 86.32 Spanish–LIN 92.74 73.95 82.29 93.79 89.52 Spanish–NN 95.34 72.35 82.27 93.91 89.65 Table 3: Tag-level precision, recall, F1 score, and accuracy and form-level accuracy for the baselines (“– BASE”) and for our approach (“–LIN” is the linear parameterization, “–NN” is the neural parameterization). ently in each language. We provide corpus statistics for both languages in the top two rows of Tab. 2. We created a hard-coded ψ(·, · | ·, ·, ·) to serve as a baseline for each language. For Spanish, we only activated, i.e. set to a number greater than zero, values that relate adjectives and determiners to nouns; for Hebrew, we only activated values that relate adjectives and verbs to nouns. We created two separate baselines because gender agreement operates differently in each language. To evaluate our approach, we held all morphosyntactic subtags fixed except for gender. For each annotated sentence, we intervened on the gender of the animate noun. We then used our model to infer which of the remaining tags should be updated (updating a tag means swapping the gender subtag because all morpho-syntactic subtags were held fixed except for gender) and reinflected the lemmata. Finally, we used the annotations to compute the taglevel F1 score and the form-level accuracy, excluding the animate nouns on which we intervened. Results. We present the results in Tab. 3. Recall is consistently significantly lower than precision. As expected, the baselines have the highest precision (though not by much). This is because they reflect well-known rules for each language. That said, they have lower recall than our approach because they fail to capture more subtle relationships. For both languages, our approach struggles with conjunctions. For example, consider the phrase ´el es un ingeniero y escritor (he is an engineer and a writer). Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora. This is because two nouns do not normally need to have the same gender when they are conjoined. Moreover, our MRF does not include co-reference information, so it cannot tell that, in this case, both nouns refer to the same person. Note 5 Esp Fra Heb Ita 0 2 4 6 Gender Bias Original Swap MRF Esp Fra Heb Ita 1 1.5 2 2.5 Grammaticality Original Swap MRF Figure 4: Gender stereotyping (left) and grammaticality (right) using the original corpus, the corpus following CDA using na¨ıve swapping of gendered words (“Swap”), and the corpus following CDA using our approach (“MRF”). that including co-reference information in our MRF would create cycles and inference would no longer be exact. 
Additionally, the lack of co-reference information means that, for Spanish, our approach fails to convert nouns that are noun-modifiers or indirect objects of verbs. Somewhat surprisingly, the neural parameterization does not outperform the linear parameterization. We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels; however, this parameter sharing does not seem to make a difference in practice, so the linear parameterization is sufficient. 5.2 Extrinsic Evaluation We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping. Following Lu et al. (2018), focus on neural language models. We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages. As our measure of gender stereotyping, we compare the log ratio of the prefix probabilities under a language model Plm for gendered, animate nouns, such as ingeniero, combined with four adjectives: good, bad, smart, and beautiful. The translations we use for these adjectives are given in App. B. We chose the first two adjectives because they should be used equally to describe men and women, and the latter two because we expect that they will reveal gender stereotypes. For example, consider log P x∈Σ∗Plm(BOS El ingeniero bueno x) P x∈Σ∗Plm(BOS La ingeniera buena x). If this log ratio is close to 0, then the language model is as likely to generate sentences that start with el ingeniero bueno (the good male engineer) as it is to generate sentences that start with la Language No. Animate Noun Pairs % of Animate Sentences Hebrew 95 20% Spanish 259 20% Italian 150 10% French 216 7% Table 4: Animate noun statistics. ingeniera bueno (the good female engineer). If the log ratio is negative, then the language model is more likely to generate the feminine form than the masculine form, while the opposite is true if the log ratio is positive. In practice, given the current gender disparity in engineering, we would expect the log ratio to be positive. If, however, the language model were trained on a corpus to which our CDA approach had been applied, we would then expect the log ratio to be much closer to zero. Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): log P x∈Σ∗Plm(BOS El ingeniero bueno x) P x∈Σ∗Plm(BOS El ingeniera bueno x). We trained the linear parameterization using UD treebanks for Spanish, Hebrew, French, and Italian (see Tab. 2). For each of the four languages, we parsed one million sentences from Wikipedia (May 2018 dump) using Dozat and Manning (2016)’s parser and extracted taggings and lemmata using the method of M¨uller et al. (2015). We automatically extracted an animacy gazetteer from WordNet (Bond and Paik, 2012) and then manually filtered the output for correctness. We provide the size of the languages’ animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in Tab. 4. 
For each sentence containing a noun in our animacy gazetteer, we created a copy of the sentence, intervened on 6 Original Swap MRF −5 0 5 Gender Bias Feminine Masculine Figure 5: Gender stereotyping for words that are stereotyped toward men or women in Spanish using the original corpus, the corpus following CDA using na¨ıve swapping of gendered words (“Swap”), and the corpus following CDA using our approach (“MRF”). the noun, and then used our approach to transform the sentence. For sentences containing more than one animate noun, we generated a separate sentence for each possible combination of genders. Choosing which sentences to duplicate is a difficult task. For example, alem´an in Spanish can refer to either a German man or the German language; however, we have no way of distinguishing between these two meanings without additional annotations. Multilingual animacy detection (Jahan et al., 2018) might help with this challenge; co-reference information might additionally help. For each language, we trained the BPE-RNNLM baseline open-vocabulary language model of Mielke and Eisner (2018) using the original corpus, the corpus following CDA using na¨ıve swapping of gendered words, and the corpus following CDA using our approach. We then computed gender stereotyping and grammaticality as described above. We provide example phrases in Tab. 5; we provide a more extensive list of phrases in App. C. Results Fig. 4 demonstrates depicts gender stereotyping and grammaticality for each language using the original corpus, the corpus following CDA using na¨ıve swapping of gendered words, and the corpus following CDA using our approach. It is immediately apparent that our approch reduces gender stereotyping. On average, our approach reduces gender stereotyping by a factor of 2.5 (the lowest and highest factors are 1.2 (Ita) and 5.0 (Esp), respectively). We expected that na¨ıve swapping of gendered words would also reduce gender stereotyping. Indeed, we see that this simple heuristic reduces gender stereotyping for some but not all of the languages. For Spanish, we also examine specific words that are stereotyped Phrase Original Swap MRF 1. El ingeniero bueno -27.6 -27.8 -28.5 2. La ingeniera buena -31.3 -31.6 -30.5 3. *El ingeniera bueno -32.2 -27.1 -33.5 4. *La ingeniero buena -33.2 -32.8 -33.6 Gender stereotyping 3.7 6.2 2 Grammaticality 3.25 0.25 4.05 Table 5: Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using na¨ıve swapping of gendered words (“Swap”), and the corpus following CDA using our approach (“MRF”). Phrases 1 and 2 are grammatical, while phrases 3 and 4 are not (dentoted by “*”). Gender stereotyping is measured using phrases 1 and 2. Grammaticality is measured using phrases 1 and 3 and using phrases 2 and 4; these scores are then averaged. toward men or women. We define a word to be stereotyped toward one gender if 75% of its occurrences are of that gender. Fig. 5 suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women. The grammaticality of the corpora following CDA differs between languages. That said, with the exception of Hebrew, our approach either sacrifices less grammaticality than na¨ıve swapping of gendered words and sometimes increases grammaticality over the original corpus. Given that we know the model did not perform as accurately for Hebrew (see Tab. 3), this finding is not surprising. 
6 Related Work In contrast to previous work, we focus on mitigating gender stereotypes in languages with rich morphology—specifically languages that exhibit gender agreement. To date, the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English. For example, Bolukbasi et al. (2016) proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings; Lu et al. (2018) studied gender stereotypes in language models; and Rudinger et al. (2018) introduced a novel Winograd schema for evaluating gender stereotypes in co-reference resolution. The most closely related work is that of Zhao et al. (2018), who used CDA to reduce gender stereotypes in co-reference resolution; however, their approach yields ungrammatical sentences in morphologically rich languages. Our approach is specifically intended to yield grammatical sentences when applied to such languages. Habash et al. (2019) also focused on morphologically rich 7 languages, specifically Arabic, but in the context of gender identification in machine translation. 7 Conclusion We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages. To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns. To the best of our knowledge, this task has not been studied previously. As a result, there is no existing annotated corpus of paired sentences that can be used as “ground truth.” Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results. For example, we demonstrated that our approach reduces gender stereotyping in neural language models. Finally, we also identified avenues for future work, such as the inclusion of co-reference information. Acknowledgments The last author acknowledges a Facebook Fellowship. References Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Tauman Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, pages 4349–4357. Francis Bond and Kyonghee Paik. 2012. A survey of WordNets and their licenses. In Proceedings of the 6th Global WordNet Conference (GWC 2012), Matsue. 64–71. Jennifer Coates. 1987. Women, Men and Language: A Sociolinguistic Account of Sex Differences in Language. Longman. Greville G. Corbett. 1991. Gender. Cambridge University Press. Greville G. Corbett. 2012. Features. Cambridge University Press. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, G´eraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra K¨ubler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017. CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 1–30, Vancouver. Association for Computational Linguistics. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task— morphological reinflection. In Proceedings of the 2016 Meeting of SIGMORPHON, Berlin, Germany. Association for Computational Linguistics. Kate Crawford. 2013. The hidden biases in big data. 
Maria De-Arteaga, Alexey Romanov, Hanna M. Wallach, Jennifer T. Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Cem Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* 2019, Atlanta, GA, USA, January 29-31, 2019, pages 120–128. Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigating unintended bias in text classification. Timothy Dozat and Christopher D. Manning. 2016. Deep biaffine attention for neural dependency parsing. CoRR, abs/1611.01734. Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evolutionary Anthropology, Leipzig. Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2017. Word embeddings quantify 100 years of gender and ethnic stereotypes. CoRR, abs/1711.08412. Nizar Habash, Houda Bouamor, and Christine Chung. 2019. Automatic gender identification and reinflection in arabic. In Proceedings of the 1st ACL Workshop on Gender Bias for Natural Language Processing, Florence, Italy. Labiba Jahan, Geeticka Chauhan, and Mark Finlayson. 2018. A new approach to animacy detection. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1–12. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Daphne Koller and Nir Friedman. 2009. Probabilistic graphical models: Principles and techniques. MIT Press. Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2018. Gender bias in neural natural language processing. CoRR, abs/1807.11714. 8 Sabrina J. Mielke and Jason Eisner. 2018. Spell once, summon anywhere: A two-level open-vocabulary language model. CoRR, abs/1804.08205. Thomas M¨uller, Ryan Cotterell, Alexander Fraser, and Hinrich Sch¨utze. 2015. Joint lemmatization and morphological tagging with lemming. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2268–2274. Association for Computational Linguistics. Joakim Nivre, Mitchell Abrams, ˇZeljko Agi´c, Lars Ahrenberg, Lene Antonsen, Katya Aplonova, Maria Jesus Aranzabe, Gashaw Arutie, Masayuki Asahara, Luma Ateyah, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, Victoria Basmov, John Bauer, Sandra Bellato, Kepa Bengoetxea, Yevgeni Berzak, Irshad Ahmad Bhat, Riyaz Ahmad Bhat, Erica Biagetti, Eckhard Bick, Rogier Blokland, Victoria Bobicev, Carl B¨orstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Adriane Boyd, Aljoscha Burchardt, Marie Candito, Bernard Caron, Gauthier Caron, G¨uls¸en Cebiro˘glu Eryi˘git, Flavio Massimiliano Cecchini, Giuseppe G. A. 
Celano, Slavom´ır ˇC´epl¨o, Savas Cetin, Fabricio Chalub, Jinho Choi, Yongseok Cho, Jayeol Chun, Silvie Cinkov´a, Aur´elie Collomb, C¸ a˘grı C¸ ¨oltekin, Miriam Connor, Marine Courtin, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Arantza Diaz de Ilarraza, Carly Dickerson, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Tomaˇz Erjavec, Aline Etienne, Rich´ard Farkas, Hector Fernandez Alcalde, Jennifer Foster, Cl´audia Freitas, Katar´ına Gajdoˇsov´a, Daniel Galbraith, Marcos Garcia, Moa G¨ardenfors, Sebastian Garza, Kim Gerdes, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh G¨okırmak, Yoav Goldberg, Xavier G´omez Guinovart, Berta Gonz´ales Saavedra, Matias Grioni, Normunds Gr¯uz¯ıtis, Bruno Guillaume, C´eline GuillotBarbance, Nizar Habash, Jan Hajiˇc, Jan Hajiˇc jr., Linh H`a M˜y, Na-Rae Han, Kim Harris, Dag Haug, Barbora Hladk´a, Jaroslava Hlav´aˇcov´a, Florinel Hociung, Petter Hohle, Jena Hwang, Radu Ion, Elena Irimia, O. l´aj´ıd´e Ishola, Tom´aˇs Jel´ınek, Anders Johannsen, Fredrik Jørgensen, H¨uner Kas¸ıkara, Sylvain Kahane, Hiroshi Kanayama, Jenna Kanerva, Boris Katz, Tolga Kayadelen, Jessica Kenney, V´aclava Kettnerov´a, Jesse Kirchner, Kamil Kopacewicz, Natalia Kotsyba, Simon Krek, Sookyoung Kwak, Veronika Laippala, Lorenzo Lambertino, Lucia Lam, Tatiana Lando, Septina Dian Larasati, Alexei Lavrentiev, John Lee, Phuong Lˆe H`ˆong, Alessandro Lenci, Saran Lertpradit, Herman Leung, Cheuk Ying Li, Josie Li, Keying Li, KyungTae Lim, Nikola Ljubeˇsi´c, Olga Loginova, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, C˘at˘alina M˘ar˘anduc, David Mareˇcek, Katrin Marheinecke, H´ector Mart´ınez Alonso, Andr´e Martins, Jan Maˇsek, Yuji Matsumoto, Ryan McDonald, Gustavo Mendonc¸a, Niko Miekka, Margarita Misirpashayeva, Anna Missil¨a, C˘at˘alin Mititelu, Yusuke Miyao, Simonetta Montemagni, Amir More, Laura Moreno Romero, Keiko Sophie Mori, Shinsuke Mori, Bjartur Mortensen, Bohdan Moskalevskyi, Kadri Muischnek, Yugo Murawaki, Kaili M¨u¨urisep, Pinkey Nainwani, Juan Ignacio Navarro Hor˜niacek, Anna Nedoluzhko, Gunta Neˇspore-B¯erzkalne, Luong Nguy˜ˆen Thi., Huy`ˆen Nguy˜ˆen Thi. Minh, Vitaly Nikolaev, Rattima Nitisaroj, Hanna Nurmi, Stina Ojala, Ad´edayo. 
Ol´u`okun, Mai Omura, Petya Osenova, Robert ¨Ostling, Lilja Øvrelid, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Guilherme Paulino-Passos, Siyao Peng, CenelAugusto Perez, Guy Perrier, Slav Petrov, Jussi Piitulainen, Emily Pitler, Barbara Plank, Thierry Poibeau, Martin Popel, Lauma Pretkalnin¸a, Sophie Pr´evost, Prokopis Prokopidis, Adam Przepi´orkowski, Tiina Puolakainen, Sampo Pyysalo, Andriela R¨a¨abis, Alexandre Rademaker, Loganathan Ramasamy, Taraka Rama, Carlos Ramisch, Vinit Ravishankar, Livy Real, Siva Reddy, Georg Rehm, Michael Rießler, Larissa Rinaldi, Laura Rituma, Luisa Rocha, Mykhailo Romanenko, Rudolf Rosa, Davide Rovati, Valentin Ros,ca, Olga Rudina, Jack Rueter, Shoval Sadde, Benoˆıt Sagot, Shadi Saleh, Tanja Samardˇzi´c, Stephanie Samson, Manuela Sanguinetti, Baiba Saul¯ıte, Yanin Sawanakunanon, Nathan Schneider, Sebastian Schuster, Djam´e Seddah, Wolfgang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Muh Shohibussirri, Dmitry Sichinava, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simk´o, M´aria ˇSimkov´a, Kiril Simov, Aaron Smith, Isabela Soares-Bastos, Carolyn Spadine, Antonio Stella, Milan Straka, Jana Strnadov´a, Alane Suhr, Umut Sulubacak, Zsolt Sz´ant´o, Dima Taji, Yuta Takahashi, Takaaki Tanaka, Isabelle Tellier, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Francis Tyers, Sumire Uematsu, Zdeˇnka Ureˇsov´a, Larraitz Uria, Hans Uszkoreit, Sowmya Vajjala, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Eric Villemonte de la Clergerie, Veronika Vincze, Lars Wallin, Jing Xian Wang, Jonathan North Washington, Seyi Williams, Mats Wir´en, Tsegay Woldemariam, Tak-sum Wong, Chunxiao Yan, Marat M. Yavrumyan, Zhuoran Yu, Zdenˇek ˇZabokrtsk´y, Amir Zeldes, Daniel Zeman, Manying Zhang, and Hanzhi Zhu. 2018. Universal dependencies 2.3. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles University. Judea Pearl. 1988. Probabilistic reasoning in intelligent systems: Networks of plausible inference. Morgan Kaufmann Publishers. Lawrence R. Rabiner and Biing-Hwang Juang. 1986. An introduction to hidden Markov models. IEEE ASSP Magazine, 3(1):4–16. Louis B. Rall. 1981. Automatic Differentiation: Techniques and Applications, volume 120 of Lecture Notes in Computer Science. Springer. 9 Rachel Rudinger, Chandler May, and Benjamin Van Durme. 2017. Social bias in elicited natural language inferences. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 74–79. Association for Computational Linguistics. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8–14. Association for Computational Linguistics. Adam Sutton, Thomas Lansdall-Welfare, and Nello Cristianini. 2018. Biased embeddings from wild data: Measuring, understanding and removing. CoRR, abs/1806.06301. Shijie Wu, Pamela Shapiro, and Ryan Cotterell. 2018. Hard non-monotonic attention for character-level transduction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4425–4438. Association for Computational Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 629–634, Minneapolis, Minnesota. Association for Computational Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. pages 2979–2989. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. pages 15–20. 10 A Belief Propagation Update Equations Our belief propagation update equations are µi→f(m) = Y f′∈N(i)\{f} µf′→i(m) (3) µfi→i(m) = φi(m) µi→fi(m) (4) µfij→i(m) = X m′∈M ψ(m′, m | pi, pj, ℓ) µj→fij(m′) (5) µfij→j(m) = X m′∈M ψ(m, m′ | pi, pj, ℓ) µi→fij(m′) (6) where N(i) returns the set of neighbouring nodes of node i. The belief at any node is given by β(v) = Y f∈N(v) µf→v(m). (7) B Adjective Translations Tab. 6 and Tab. 7 contain the feminine and masculine translations of the four adjectives that we used. Adjective French Hebrew Italian Spanish good bonneטובהbuona buena bad mauvaiseרעהcattiva mala smart intelligenteחכמהintelligenti inteligente beautiful belleיפהbella hermosa Table 6: Feminine translations of good, bad, smart, beautiful in French, Hebrew, Italian, and Spanish Adjective French Hebrew Italian Spanish good bonטובbuono bueno bad mauvaisרעcattivo malo smart intelligent Mחכintelligente inteligente beautiful belיפהbello hermoso Table 7: Masculine translations of good, bad, smart, beautiful in French, Hebrew, Italian, and Spanish C Extrinsic Evaluation Example Phrases For each noun in our animacy gazetteer, we generated sixteen phrases. Consider the noun engineer as an example. We created four phrases—one for each translation of The good engineer, The bad engineer, The smart engineer, and The beautiful engineer. These phrases, as well as their prefix log-likelihoods are provided below in Tab. 8. Phrase Original Swap MRF El ingeniero bueno -27.63 -27.80 -28.50 La ingeniera buena -31.34 -31.65 -30.46 *El ingeniera bueno -32.22 -27.06 -33.49 *La ingeniero buena -33.22 -32.80 -33.56 El ingeniero mal -30.45 -30.90 -30.86 La ingeniera mala -31.03 -29.63 -30.59 *El ingeniera mal -34.19 -30.17 -35.15 *La ingeniero mala -33.09 -30.80 -33.81 El ingeniero inteligente -26.19 -25.49 -26.64 La ingeniera inteligente -29.14 -26.31 -27.57 *El ingeniera inteligente -29.80 -24.99 -30.77 *La ingeniero inteligente -31.00 -27.12 -30.16 El ingeniero hermoso -28.74 -28.65 -29.13 La ingeniera hermosa -31.21 -29.25 -30.04 *El ingeniera hermoso -32.54 -27.97 -33.83 *La ingeniero hermosa -33.55 -30.35 -32.96 Table 8: Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using na¨ıve swapping of gendered words (“Swap”), and the corpus following CDA using our approach (“MRF”). Ungrammatical phrases are denoted by “*”. 11
A Transparent Framework for Evaluating Unintended Demographic Bias in Word Embeddings Chris Sweeney and Maryam Najafian Massachusetts Institute of Technology Cambridge, MA, USA {csweeney,najafian}@mit.edu Abstract Word embedding models have gained a lot of traction in the Natural Language Processing community, however, they suffer from unintended demographic biases. Most approaches to evaluate these biases rely on vector space based metrics like the Word Embedding Association Test (WEAT). While these approaches offer great geometric insights into unintended biases in the embedding vector space, they fail to offer an interpretable meaning for how the embeddings could cause discrimination in downstream NLP applications. In this work, we present a transparent framework and metric for evaluating discrimination across protected groups with respect to their word embedding bias. Our metric (Relative Negative Sentiment Bias, RNSB) measures fairness in word embeddings via the relative negative sentiment associated with demographic identity terms from various protected groups. We show that our framework and metric enable useful analysis into the bias in word embeddings. 1 Introduction Word embeddings have established themselves as an integral part of Natural Language Processing (NLP) applications. Unfortunately word embeddings have also introduced unintended biases that could cause downstream NLP systems to be unfair. Recent studies have shown that word embeddings exhibit unintended gender and stereotype biases inherent in the training corpus. Bias can be defined as an unfair expression of prejudice for or against a person, a group, or an idea. Bias is a broad term, which covers a range of problems particularly relevant in natural language systems such as, discriminatory gender bias (Bolukbasi et al., 2016a; Zhao et al., 2017), bias against regionally accented speech (Najafian et al., 2016, 2017), personal or political view bias (Iyyer et al., 2014; Recasens et al., 2013), and many other examples. In Figure 1: 2-D PCA embeddings for positive/negative sentiment words and a set of national origin identity terms. Geometrically, it is difficult to parse how these embeddings can lead to discrimination. our work, we restrict our definition of bias to unequal distributions of negative sentiment among demographic identity terms in word embeddings. One could also look at unequal distributions of positive sentiment, but for this work we restrict ourselves to the negative case. Sentiment analysis makes up a large portion of current NLP systems. Therefore, preventing negative sentiment from mixing with sensitive attributes (i.e. race, gender, religion) in word embeddings is needed to prevent discrimination in ML models using the embeddings. As studied in (Packer et al., 2018), unintentionally biased word embeddings can have adverse consequences when deployed in applications, such as movie sentiment analyzers or messaging apps. Negative sentiment can be unfairly entangled in the word embeddings, and detecting this unintended bias is a difficult problem. We need clear signals to evaluate which groups are discriminated against due to the bias in an embedding model. That way we can pinpoint where to mitigate those biases. To demonstrate this need for clear signals of bias in word embeddings, we look at Figure 1. Figure 1 shows a 2D word embedding projection of positive sentiment (green) and negative sentiment (red) words. 
It would be unfair for any given demographic identity word vector (blue) to be more semantically related to negative terms than the other identities. However, many identity terms exist closer to negative words than other identity terms in the vector space. This bias may affect a downstream ML model, but the vector space has no absolute interpretable meaning, especially when it comes to whether this embedding model will lead to a unfairly discriminative algorithm. Our framework enables transparent insights into word embedding bias by instead viewing the output of a simple logistic regression algorithm trained on an unbiased positive/negative word sentiment dataset initialized with biased word vectors. We use this framework to create a clear metric for unintended demographic bias in word embeddings. 2 Prior Work Researchers have found a variety of ways in which dangerous unintended bias can show up in NLP applications (Blodgett and O’Connor, 2017; Hovy and Spruit, 2016; Tatman, 2017). Mitigating such biases is a difficult problem, and researchers have created many ways to make fairer NLP applications. Much of the focus for mitigating unintended bias in NLP is either targeted at reducing gender stereotypes in text (Bolukbasi et al., 2016b,a; Zhao et al., 2017; Zhang et al., 2018), or inequality of sentiment or toxicity for various protected groups (Caliskan-Islam et al., 2016; Bakarov, 2018; Dixon et al.; Garg et al., 2018; Kiritchenko and Mohammad, 2018). More specifically, word embeddings has been an area of focus for evaluating unintended bias. (Bolukbasi et al., 2016b) defines a useful metric for identifying gender bias and (Caliskan-Islam et al., 2016) defines a metric called the WEAT score for evaluating unfair correlations with sentiment for various demographics in text. Unfortunately metrics like these leverage vector space arguments between only two identities at a time like man vs woman (Bolukbasi et al., 2016a), or European American names vs. African American names (Caliskan-Islam et al., 2016). Though geometrically intuitive, these tests do not have a direct relation to discrimination in general. Our framework and RNSB metric enable a clear evaluation of discrimination with respect to word embedding bias for a whole class of demographics. 3 Methods We present our framework for understanding and evaluating unintentional demographic bias in word embeddings. We first describe the flow of our framework. Then, we address which datasets/models were chosen for our approach. Finally, we show how our framework can enable analysis and new metrics like RNSB. 3.1 Framework Figure 2: We isolate unintended bias to the word embeddings by training a logistic regression classifier on a unbiased positive/negative word sentiment dataset (initialized with the biased word embeddings). We measure word embedding bias by analyzing the predicted probability of negative sentiment for identity terms. Our framework enables the evaluation of unintended bias in word embeddings through the results of negative sentiment predictions. Our framework has a simple layout. Figure 2 shows the flow of our system. We first use the embedding model we are trying to evaluate to initialize vectors for an unbiased positive/negative word sentiment dataset. Using this dataset, we train a logistic classification algorithm to predict the probability of any word being a negative sentiment word. After training, we take a set of neutral identity terms from a protected group (i.e. 
national origin) and predict the probability of negative sentiment for each word in the set. Neutral identity terms that are unfairly entangled with negative sentiment in the word embeddings will be classified like their neighboring sentiment words from the sentiment dataset. We leverage this set of negative sentiment probabilities to summarize unintended demographic bias using RNSB. 3.2 Models and Data We evaluate three pretrained embedding models: GloVe (Pennington et al., 2014), Word2vec (Mikolov et al., 2013) (trained on the large Google News corpus), and ConceptNet (Speer et al., 2017). GloVe and Word2vec embeddings have been shown to contain unintended bias in (Bolukbasi et al., 2016a; Caliskan-Islam et al., 2016). ConceptNet has been shown to be less biased than these models (Speer, 2017) due to the mixture of curated corpora used for training. As part of our pipeline, we also use a labeled positive/negative sentiment training set (Hu and Liu, 2004). This dataset has been shown to be a trustworthy lexicon for negative and positive sentiment words (Pang et al., 2008; Liu, 2012; Wilson et al., 2005). We trust these labels to be unbiased so that we may isolate the unintended biases entering our system to the word embeddings. Finally, we use a simple logistic regression algorithm to predict negative sentiment. Although the choice of ML model can have an impact on fairness for sentiment applications as shown in (Kiritchenko and Mohammad, 2018), we choose a simple ML model to limit the possible unintended biases introduced downstream from our word embeddings. 3.3 Bias Analysis: RNSB We now present our metric for unintended demographic bias, RNSB. For gold standard labeled positive/negative sentiment words, (xi, yi), in training set, S, where xi is a word vector from a possibly biased word embedding model, we find the minimizer, f∗(xi) = σ(wT xi), for the logistic loss, l, and learned weights, w. minw∈Rd n X i=0 l(yi, wT xi) + λ∥w∥2, λ > 0 Then for a set, K = {k1, ..., kt}, of t demographic identity word vectors from a particular protected group (i.e. national origin, religion, etc.), we define a set, P, containing the predicted negative sentiment probability via minimizer, f∗, normalized to be one probability mass. P = ( f∗(k1) Pt i=1 f∗(ki) , ..., f∗(kt) Pt i=1 f∗(ki) ) Thus, our metric, RNSB(P), is defined as the KL divergence of P from U, where U is the uniform distribution for t elements. RNSB(P) = DKL (P∥U) We choose our set of neutral identity terms based on the most populous demographics for each protected group. However, due to the simplicity of this method, one can easily adapt it to include identity terms that suit the application in need of analysis. Since neutral identity terms are inherently not associated with sentiment, it is unfair to have identity term with differing levels of negative sentiment. This type of discrimination can show up in many downstream sentiment analysis applications. Thus, we want no differences between negative sentiment predictions of various identity terms. Mathematically, this can be represented as a uniform distribution of negative sentiment probability for identity terms from a protected group. Our RNSB metric captures the distance, via KL divergence, between the current distribution of negative sentiment and the fair uniform distribution. So the more fair a word embedding model with respect to sentiment bias, the lower the RNSB metric. 
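As a concrete illustration, a minimal sketch of the RNSB computation using scikit-learn and SciPy is given below. It assumes the embedding model is exposed as a word-to-vector dictionary and that the sentiment lexicon and identity-term list are supplied by the caller; the regularization strength and all names are placeholders rather than the settings used in the paper.

```python
import numpy as np
from scipy.stats import entropy  # entropy(p, q) computes D_KL(p || q)
from sklearn.linear_model import LogisticRegression

def rnsb(embeddings, pos_words, neg_words, identity_terms, C=1.0):
    """Relative Negative Sentiment Bias for one protected group."""
    neg_set = set(neg_words)
    words = [w for w in list(pos_words) + list(neg_words) if w in embeddings]
    X = np.stack([embeddings[w] for w in words])
    y = np.array([1 if w in neg_set else 0 for w in words])  # 1 = negative sentiment

    # l2-regularized logistic regression trained on the (possibly biased) word vectors
    clf = LogisticRegression(C=C, max_iter=1000).fit(X, y)

    K = np.stack([embeddings[k] for k in identity_terms if k in embeddings])
    f_star = clf.predict_proba(K)[:, 1]   # predicted negative-sentiment probability per term
    P = f_star / f_star.sum()             # normalize to one probability mass
    U = np.full(len(P), 1.0 / len(P))     # fair case: uniform over identity terms
    return entropy(P, U)                  # RNSB(P) = D_KL(P || U)
```

Comparing the returned value across different embedding dictionaries supports the kind of comparison reported later in Table 1, where lower values indicate a more even spread of predicted negative sentiment across a group's identity terms.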
4 Results and Discussion We evaluate our framework and metric on two cases studies: National Origin Discrimination and Religious Discrimination. For each case study, we create a set of the most frequent identity terms from the protected groups in the Wikipedia word corpus and analyze bias with respect to these terms via our framework. First, we compare the RNSB metric for 3 pretrained word embeddings, showing that our metric is consistent with other word embedding analysis like WEAT (Caliskan-Islam et al., 2016). We then show that our framework enables an insightful view into word embedding bias. 4.1 RNSB Metric on Word Embeddings We vary the word embeddings used in our framework and calculate the RNSB metric for each embedding. The results are displayed in Table 1. For both case studies, the bias is largest in GloVe, as shown by the largest RNSB metric. As mentioned earlier, ConceptNet is a state of the art model that mixes models like GloVe and Word2vec, creating fairer word embeddings. Through the RNSB metric, one can see that the unintended demographic Figure 3: Histograms showing relative negative sentiment probability between national origin identity terms. The top left graph is GloVe, the top right is ConceptNet. The bottom histogram is the uniform distribution of negative sentiment in a perfect fair scenario. bias of these word embeddings is an order of magnitude lower than GloVe or Word2vec. Although the RNSB metric is not directly comparable to WEAT scores, these results are still consistent with some of the bias predicted by (Caliskan-Islam et al., 2016). The WEAT score shows that word embeddings like Word2vec and GloVe are biased with respect to national origin because European-American names are more correlated with positive sentiment than AfricanAmerican names. RNSB captures the same types of biases, but has a clear and larger scope, measuring discrimination with respect to more than two demographics within a protected group. Case Study GloVe Word2Vec ConceptNet National Origin Identity 0.6225 0.1945 0.0102 Religion Identity 0.3692 0.1026 0.0291 Table 1: Table showing our RNSB metric for various word embeddings on two case studies. Our metric effectively predicts the unintended demographic bias in the presented word embeddings with respect to negative sentiment. 4.2 Analyzing Unintended Demographic Bias in Word Embeddings Using the probability distribution of negative sentiment for the identity terms in a protected group, we can gain insights into the relative risks for discrimination between various demographics. Figure 3 shows three histograms. The bottom histogram is the uniform distribution. As described earlier, zero unintended demographic bias with respect to our definition is achieved when all the identity terms within a protected group have equal negative sentiment. The top two histograms show the negative sentiment probability for each identity normalized across all terms to be a probability distribution. The left histogram is computed using the GloVe word embeddings, and the right histogram is computed using the fairer ConceptNet embeddings. One can see that certain demographics have very high negative sentiment predictions, while others have very low predictions. The ConceptNet distribution seems to equalize much of this disparity. This type of analysis is very insightful as it enables one to see which identities are more at risk for discrimination. 
A more direct way to measure how certain groups receive similar unfair treatment is to compute a correlation matrix between the vectors containing negative sentiment predictions for each identity term. We compute this matrix for the same two cases: GloVe word embeddings (top) and ConceptNet word embeddings (bottom) shown in Figure 4. The GloVe word embedding correlation matrix contains a lot of dark low correlations between identities, as a lot of identities contain small amounts of negative sentiment. But this visual brings out that certain groups like Indian, Mexican, and Russian have a high correlation, indicating that they could be treated similarly unfairly in a downstream ML algorithm. This is a useful insight that could allow a practitioner to change to embedding training corpora to create fairer models. For the ConceptNet word embeddings, we see a much more colorful heat map, indicating there are higher correlations between more identity terms. This hints that ConceptNet contains less targeted discrimination via negative sentiment. This visual also brings out slight differences in negative sentiment prediction. Identity terms like Scottish have lower correlations across the board, manifesting that this identity has slightly less negative sentiment than the rest of the identities. This is important to analyze to get a broader context for how various identities could receive different amounts of discrimination stemming from the word embedding bias. (a) GloVe Fairness Correlation Heatmap (b) ConceptNet Fairness Correlation Heatmap Figure 4: National origin correlation matrix for negative sentiment prediction using GloVe (a) and ConceptNet (b) word embeddings. We can use these figures to analyze how certain groups could be similarly discriminated against via their negative sentiment correlation. 5 Discussion We showed how our framework can be used in the religious and national origin case studies. In practice, our framework should be used to measure bias among demographics of interest for the NLP application in question. Our RNSB metric is a useful signal a practitioner can use to choose the embedding model with the least amount of risk for discrimination in their application, or even to evaluate what types of unintended biases exists in their training corpora. We used our framework to evaluate unintended bias with respect to sentiment, but there exists many other types of unintended demographic bias to create clear signals for in word embeddings. 6 Conclusion We presented a transparent framework for evaluating unintended demographic bias in word embeddings. For this work our scope was limited to unfair biases with respect to negative sentiment. In our framework, we train a classifier on an unbiased positive/negative word sentiment dataset initialized with biased word embeddings. This way, we can observe the unfairness in the word embeddings at the ML prediction level. This allows us to observe clearer signals of bias in our metric, Relative Negative Sentiment Bias (RNSB). Previous metrics and analysis into unintended bias in word embeddings rely on vector space arguments for only two demographics at a time, which does not lend itself well to evaluating real world discrimination. Our metric has a direct connection to discrimination and can evaluate any number of demographics in a protected group. Finally, our framework and metric reveal transparent analysis of the unintended bias hidden in word embeddings. 
Acknowledgments This work was made possible in part through support of the United States Agency for International Development. The opinions expressed herein are those of the authors and do not necessarily reflect the views of the United States Agency for International Development or the US Government. References Amir Bakarov. 2018. A survey of word embeddings evaluation methods. arXiv preprint arXiv:1801.09536. Su Lin Blodgett and Brendan O’Connor. 2017. Racial disparity in natural language processing: A case study of social media african-american english. FATML. Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016a. Quantifying and reducing stereotypes in word embeddings. ICML. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016b. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In NIPS, pages 4349–4357. Aylin Caliskan-Islam, Joanna J Bryson, and Arvind Narayanan. 2016. Semantics derived automatically from language corpora necessarily contain human biases. Science, pages 1–14. Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. Measuring and mitigating unintended bias in text classification. In AAAI. Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16). Dirk Hovy and Shannon L. Spruit. 2016. The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 591–598. Association for Computational Linguistics. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In ACM, pages 168–177. Mohit Iyyer, Peter Enns, Jordan Boyd-Graber, and Philip Resnik. 2014. Political ideology detection using recursive neural networks. In ACL, volume 1, pages 1113–1122. Svetlana Kiritchenko and Saif M Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. Proceedings of the 7thJoint Conference on Lexical and Computational Se-mantics(*SEM), New Orleans, USA. Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis lectures on human language technologies, 5(1):1–167. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119. Maryam Najafian, Wei-Ning Hsu, Ahmed Ali, and James Glass. 2017. Automatic speech recognition of arabic multi-genre broadcast media. In ASRU, pages 353–359. Maryam Najafian, Saeid Safavi, John HL Hansen, and Martin Russell. 2016. Improving speech recognition using limited accent diverse british english training data with deep neural networks. In MLSP, pages 1– 6. Ben Packer, Yoni Halpern, Mario Guajardo-Cspedes, and Margaret Mitchell. 2018. Text embedding models contain bias. here’s why that matters. Google Developers. Bo Pang, Lillian Lee, et al. 2008. Opinion mining and sentiment analysis. FTIR, 2(1–2):1–135. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Marta Recasens, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. 2013. Linguistic models for analyzing and detecting biased language. In ACL, volume 1, pages 1650–1659. Robyn Speer. 2017. Conceptnet numberbatch 17.04: better, less-stereotyped word vectors. ConceptNet. 
Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In AAAI, pages 4444–4451. Rachael Tatman. 2017. Gender and dialect bias in youtube’s automatic captions. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 53–59. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phraselevel sentiment analysis. In EMNLP, pages 347– 354. Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. AIES. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. EMNLP.
2019
162
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668–1678 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1668 The Risk of Racial Bias in Hate Speech Detection Maarten Sap♦ Dallas Card♣ Saadia Gabriel♦ Yejin Choi♦♥ Noah A. Smith♦♥ ♦Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, USA ♣Machine Learning Department, Carnegie Mellon University, Pittsburgh, USA ♥Allen Institute for Artificial Intelligence, Seattle, USA [email protected] Abstract We investigate how annotators’ insensitivity to differences in dialect can lead to racial bias in automatic hate speech detection models, potentially amplifying harm against minority populations. We first uncover unexpected correlations between surface markers of African American English (AAE) and ratings of toxicity in several widely-used hate speech datasets. Then, we show that models trained on these corpora acquire and propagate these biases, such that AAE tweets and tweets by self-identified African Americans are up to two times more likely to be labelled as offensive compared to others. Finally, we propose dialect and race priming as ways to reduce the racial bias in annotation, showing that when annotators are made explicitly aware of an AAE tweet’s dialect they are significantly less likely to label the tweet as offensive. 1 Introduction Toxic language (e.g., hate speech, abusive speech, or other offensive speech) primarily targets members of minority groups and can catalyze reallife violence towards them (O’Keeffe et al., 2011; Cleland, 2014; Mozur, 2018). Social media platforms are under increasing pressure to respond (Trindade, 2018), but automated removal of such content risks further suppressing alreadymarginalized voices (Yasin, 2018; Dixon et al., 2018). Thus, great care is needed when developing automatic toxic language identification tools. The task is especially challenging because what is considered toxic inherently depends on social context (e.g., speaker’s identity or dialect). Indeed, terms previously used to disparage communities (e.g., “n*gga”, “queer”) have been reclaimed by those communities while remaining offensive when used by outsiders (Rahman, 2012). Figure 1 illustrates how phrases in the African American English dialect (AAE) are labelled by a publicly available toxicity detection tool as much crowdsourcing PerspectiveAPI Toxicity score I saw him yesterday. What's up, bro! I saw his ass yesterday. 95% 6% Wussup, n*gga! 90% 7% Wussup, n*gga! classifier Non-toxic tweets (per Spears, 1998) Figure 1: Phrases in African American English (AAE), their non-AAE equivalents (from Spears, 1998), and toxicity scores from PerspectiveAPI.com. Perspective is a tool from Jigsaw/Alphabet that uses a convolutional neural network to detect toxic language, trained on crowdsourced data where annotators were asked to label the toxicity of text without metadata. more toxic than general American English equivalents, despite their being understood as non-toxic by AAE speakers (Spears, 1998, see §2). In this work, we first empirically characterize the racial bias present in several widely used Twitter corpora annotated for toxic content, and quantify the propagation of this bias through models trained on them (§3). 
We establish strong associations between AAE markers (e.g., “n*ggas”, “ass”) and toxicity annotations, and show that models acquire and replicate this bias: in other corpora, tweets inferred to be in AAE and tweets from self-identifying African American users are more likely to be classified as offensive. Second, through an annotation study, we introduce a way of mitigating annotator bias through dialect and race priming. Specifically, by designing tasks that explicitly highlight the inferred dialect of a tweet or likely racial background of its author, we show that annotators are significantly less likely to label an AAE tweet as offensive than when not shown this information (§4). 1669 Our findings show that existing approaches to toxic language detection have racial biases, and that text alone does not determine offensiveness. Therefore, we encourage paying greater attention to the confounding effects of dialect and a speaker’s social identity (e.g., race) so as to avoid unintended negative impacts. 2 Race and Dialect on Social Media Since previous research has exposed the potential for other identity-based biases in offensive language detection (e.g., gender bias; Park et al., 2018), here we investigate racial bias against speech by African Americans, focusing on Twitter as it is a particularly important space for Black activism (Williams and Domoszlai, 2013; Freelon et al., 2016; Anderson et al., 2018). Race is a complex, multi-faceted social construct (Sen and Wasow, 2016) that has correlations with geography, status, dialect, and more. As Twitter accounts typically do not have self-reported race information, researchers rely on various correlates of race as proxies. We use the African American English dialect (AAE) as a proxy for race. AAE is a widely used dialect of English that is common among, but not unique to, those who identify as African American,1 and is often used in written form on social media to signal a cultural identity (Green, 2002; Edwards, 2004; Florini, 2014). Dialect estimation In this work, we infer dialect using a lexical detector of words associated with AAE or white-aligned English. We use the topic model from Blodgett et al. (2016), which was trained on 60M geolocated tweets and relies on US census race/ethnicity data as topics. The model yields probabilities of a tweet being AAE (pAAE) or White-aligned English (pwhite).2 3 Biases in Toxic Language Datasets To understand the racial and dialectic bias in toxic language detection, we focus our analyses on two corpora of tweets (Davidson et al., 2017; Founta et al., 2018) that are widely used in hate speech detection (Park et al., 2018; van Aken et al., 2018; Kapoor et al., 2018; Alorainy et al., 2018; Lee 1Of course, many African Americans might not use AAE in every context, or at all. For further discussion of AAE, please refer to Blodgett et al. (2016). 2The model yields AAE, Hispanic, Asian/Other and White-aligned dialect probabilities, but for the purpose of our study we only focus on AAE and White-aligned dialects. category count AAE corr. DWMW17 hate speech 1,430 −0.057 offensive 19,190 0.420 none 4,163 −0.414 total 24,783 FDCL18 hateful 4,965 0.141 abusive 27,150 0.355 spam 14,030 −0.102 none 53,851 −0.307 total 99,996 Table 1: Number of tweets in each category, and correlation with AAE (Pearson r, p ≪0.001). We assign tweets to categories based on the label for FDCL18, and majority class for DWMW17. Correlations are colored for interpretability. 
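A minimal sketch of the correlation analysis in §3.1 is shown below. It assumes each tweet carries a dialect probability from the Blodgett et al. (2016) model and a single toxicity label, stored in a pandas DataFrame; the column names are illustrative.

```python
from scipy.stats import pearsonr

def label_dialect_correlations(df, dialect_col="p_aae", label_col="label"):
    """Pearson r between p_AAE and a binary indicator for each toxicity
    category, as in Table 1. `df` is a pandas DataFrame with one row per tweet."""
    results = {}
    for category in sorted(df[label_col].unique()):
        indicator = (df[label_col] == category).astype(float)
        r, p_value = pearsonr(df[dialect_col], indicator)
        results[category] = (r, p_value)
    return results
```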
et al., 2018; Waseem et al., 2018).3 Different protocols were used to collect the tweets in these corpora, but both were annotated by Figure-Eight4 crowdworkers for various types of toxic language, shown in Table 1. DWMW17 (Davidson et al., 2017) includes annotations of 25K tweets as hate speech, offensive (but not hate speech), or none. The authors collected data from Twitter, starting with 1,000 terms from HateBase (an online database of hate speech terms) as seeds, and crowdsourced at least three annotations per tweet. FDCL18 (Founta et al., 2018) collects 100K tweets annotated with four labels: hateful, abusive, spam or none. Authors used a bootstrapping approach to sampling tweets, which were then labelled by five crowdsource workers. 3.1 Data Bias To quantify the racial bias that can arise during the annotation process, we investigate the correlation between toxicity annotations and dialect probabilities given by Blodgett et al. (2016). Table 1 shows the Pearson r correlation between pAAE and each toxicity category. For both datasets, we uncover strong associations between 3Our findings also hold for the widely used data from Waseem and Hovy (2016). However, because of severe limitations of that dataset (see Schmidt and Wiegand, 2017; Klubika and Fernandez, 2018), we relegate those analyses to supplementary (§A.3). 4www.figure-eight.com 1670 Within dataset proportions Proportions on DEMOGRAPHIC16 Proportions on USERLEVELRACE18 DWMW17 % false identification Group Acc. None Offensive Hate AAE 94.3 1.1 46.3 0.8 White 87.5 7.9 9.0 3.8 Overall 91.4 2.9 17.9 2.3 % false identification Group Acc. None Abusive Hateful AAE 81.4 4.2 26.0 1.7 White 82.7 30.5 4.5 0.8 Overall 81.4 20.9 6.6 0.8 0 25 50 75 100 AAE White Overall Dialect 58.1 38.7 79.3 18.5 74.0 23.3 None Offensive Hate 0 25 50 75 100 AA White Overall Self-reported race 77.1 20.0 84.2 13.5 83.0 14.5 None Offensive Hate FDCL18 0 25 50 75 100 AAE White Overall Dialect 56.8 24.6 77.9 11.4 72.1 14.4 Spam None Abusive Hateful 0 25 50 75 100 AA White Overall Self-reported race 70.6 10.8 75.5 7.4 74.6 7.9 Spam None Abusive Hateful Figure 2: Left: classification accuracy and per-class rates of false positives (FP) on test data for models trained on DWMW17 and FDCL18, where the group with highest rate of FP is bolded. Middle and right: average probability mass of toxicity classes in DEMOGRAPHIC16 and USERLEVELRACE18, respectively, as given by classifiers trained on DWMW17 (top) and FDCL18 (bottom). Proportions are shown for AAE, White-aligned English, and overall (all tweets) for DEMOGRAPHIC16, and for self-identified White authors, African American authors (AA), and overall for USERLEVELRACE18. inferred AAE dialect and various hate speech categories, specifically the “offensive” label from DWMW17 (r = 0.42) and the “abusive” label from FDCL18 (r = 0.35), providing evidence that dialect-based bias is present in these corpora. As additional analyses, we examine the interaction between unigrams indicative of dialect and hate speech categories, shown in §A.1. 3.2 Bias Propagation through Models To further quantify the impact of racial biases in hate speech detection, we investigate how these biases are acquired by predictive models. First, we report differences in rates of false positives (FP) between AAE and White-aligned dialect groups for models trained on DWMW17 or FDCL18. 
Then, we apply these models to two reference Twitter corpora, described below, and compute average rates of reported toxicity, showing how these biases generalize to other data.5 DEMOGRAPHIC16 (Blodgett et al., 2016) contains 56M tweets (2.8M users) with dialect estimated using a demographic-aware topic model that leverages census race/ethnicity data and geocoordinates of the user profile. As recommended, we assign dialect labels to tweets with dialect probabilities greater than 80%. 5We assume a priori that the average tweet is not inherently more toxic in a particular dialect. Assessing the veracity of this assumption requires a deep understanding of sociocultural norms of profane and toxic speech. USERLEVELRACE18 (Preot¸iuc-Pietro and Ungar, 2018) is a corpus of 5.4M tweets, collected from 4,132 survey participants (3,184 White, 374 AA) who reported their race/ethnicity and Twitter user handle. For this dataset, we compare differences in toxicity predictions by self-reported race, instead of inferring message-level dialect.6 For each of the two toxic language corpora, we train a classifier to predict the toxicity label of a tweet. Using a basic neural attention architecture (Wang et al., 2016; Yang et al., 2016), we train a classifier initialized with GloVe vectors (Pennington et al., 2014) to minimize the cross-entropy of the annotated class conditional on text, x: p(class | x) ∝exp(Woh + bo), (1) with h = f(x), where f is a BiLSTM with attention, followed by a projection layer to encode the tweets into an H-dimensional vector.7 We refer the reader to the appendix for experimental details and hyperparameters (§A.2). Results Figure 2 (left) shows that while both models achieve high accuracy, the false positive rates (FPR) differ across groups for several toxicity labels. The DWMW17 classifier predicts almost 50% of non-offensive AAE tweets as being offensive, and FDCL18 classifier shows higher FPR for 6Note that lexical dialect inferences of AAE (pAAE) significantly correlate with both the AAE group from DEMOGRAPHIC16 (Pearson r = 0.61, p ≪0.001) and self-reported AA race from USERLEVELRACE18 (Pearson r = 0.21, p ≪ 0.001). 7In preliminary experiments, our findings held regardless of our choice of classifier. 1671 the “Abusive” and “Hateful” categories for AAE tweets. Additionally, both classifiers show strong tendencies to label White tweets as “none”. These discrepancies in FPR across groups violate the equality of opportunity criterion, indicating discriminatory impact (Hardt et al., 2016). We further quantify this potential discrimination in our two reference Twitter corpora. Figure 2 (middle and right) shows that the proportions of tweets classified as toxic also differ by group in these corpora. Specifically, in DEMOGRAPHIC16, AAE tweets are more than twice as likely to be labelled as “offensive” or “abusive” (by classifiers trained on DWMW17 and FDCL18, respectively). We show similar effects on USERLEVELRACE18, where tweets by African American authors are 1.5 times more likely to be labelled “offensive”. Our findings corroborate the existence of racial bias in the toxic language datasets and confirm that models propagate this bias when trained on them.8 4 Effect of Dialect To study the effect of dialect information on ratings of offensiveness, we run a small controlled experiment on Amazon Mechanical Turk where we prime annotators to consider the dialect and race of Twitter users. 
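For reference, the classifier in Eq. (1) can be sketched in PyTorch as a BiLSTM over GloVe-initialized embeddings with a simple attention layer, a projection to an H-dimensional vector h, and a linear output layer trained with cross-entropy. The exact attention parameterization and nonlinearity below are our assumptions; the hyperparameters (H = 64, 300-d GloVe, Adam with learning rate 0.001) follow §A.2, and this is not the authors' released code.

```python
import torch
import torch.nn as nn

class AttentionBiLSTMClassifier(nn.Module):
    """Sketch of Eq. (1): p(class | x) proportional to exp(W_o h + b_o), h = f(x)."""

    def __init__(self, vocab_size, num_classes, emb_dim=300, hidden=64,
                 pretrained_emb=None):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        if pretrained_emb is not None:              # e.g., 300-d GloVe vectors
            self.emb.weight.data.copy_(pretrained_emb)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)        # per-token attention score
        self.proj = nn.Linear(2 * hidden, hidden)   # projection to H-dim vector h
        self.out = nn.Linear(hidden, num_classes)   # W_o, b_o

    def forward(self, token_ids):
        e = self.emb(token_ids)                     # (B, T, emb_dim)
        states, _ = self.bilstm(e)                  # (B, T, 2 * hidden)
        alpha = torch.softmax(self.attn(states).squeeze(-1), dim=-1)   # (B, T)
        h = torch.tanh(self.proj((alpha.unsqueeze(-1) * states).sum(dim=1)))
        return self.out(h)   # logits; train with cross-entropy (Adam, lr 0.001)
```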
We ask workers to determine whether a tweet (a) is offensive to them, and (b) could be seen as offensive to anyone. In the dialect priming condition, we explicitly include the tweet’s dialect as measured by Blodgett et al. (2016), as well as extra instructions priming workers to think of tweet dialect as a proxy for the author’s race. In the race priming condition, we encourage workers to consider the likely racial background of a tweet’s author, based on its inferred dialect (e.g., an AAE tweet is likely authored by an African American Twitter user; see §A.5 for the task instructions). For all tasks, we ask annotators to optionally report gender, age, race, and political leaning.9 With a distinct set of workers for each condition, we gather five annotations apiece for a sample of 1,351 tweets stratified by dialect, toxicity category, and dataset (DWMW17 and FDCL18).10 8As noted by Chung (2019), the PerspectiveAPI displays similar racial biases shown in the appendix (§A.4). 9This study was approved by the Institutional Review Board (IRB) at the University of Washington. 10Annotations in the control setting agreed moderately with toxicity labels in DWMW17 and FDCL18 (Pearson r = 0.592 and r = 0.331, respectively; p ≪0.001). 67.0 64.0 60.6 41.4 44.1 32.3 15.0 12.8 12.5 28.4 22.7 25.1 18.0 23.2 26.9 30.1 33.1 42.5 0 20 40 60 80 100 race dialect control race dialect control offensive to you offensive to anyone no maybe yes Figure 3: Proportion (in %) of offensiveness annotations of AAE tweets in control, dialect, and race priming conditions. Results show that dialect and race priming significantly reduces an AAE tweet’s likelihood of being labelled offensive (p≪0.001). Despite the inherent subjectivity of these questions, workers frequently agreed about a tweet being offensive to anyone (76% pairwise agreement, κ = 0.48) or to themselves (74% p.a., κ = 0.30). Results Figure 3 shows that priming workers to think about dialect and race makes them significantly less likely to label an AAE tweet as (potentially) offensive to anyone. Additionally, race priming makes workers less likely to find AAE tweets offensive to them. To confirm these effects, we compare the means of the control condition and treatment conditions,11 and test significance with a t test. When rating offensiveness to anyone, the mean for control condition (Mc = 0.55) differs from dialect (Md = 0.44) and race (Mr = 0.44) conditions significantly (p ≪0.001). For ratings of offensiveness to workers, only the difference in means for control (Mc = 0.33) and race (Md =0.25) conditions is significant (p ≪0.001). Additionally, we find that overall, annotators are substantially more likely to rate a tweet as being offensive to someone, than to rate it as offensive to themselves, suggesting that people recognize the subjectivity of offensive language. Our experiment provide insight into racial bias in annotations and shows the potential for reducing it, but several limitations apply, including the skewed demographics of our worker pool (75% self-reported White). Additionally, research suggests that motivations to not seem prejudiced 11We convert the offensiveness labels to real numbers (0: “no”, 0.5: “maybe”, 1: “yes”). 1672 could buffer stereotype use, which could in turn influence annotator responses (Plant and Devine, 1998; Moskowitz and Li, 2011). 5 Related Work A robust body of work has emerged trying to address the problem of hate speech and abusive language on social media (Schmidt and Wiegand, 2017). 
Many datasets have been created, but most are either small-scale pilots (∼100 instances; Kwok and Wang, 2013; Burnap and Williams, 2015; Zhang et al., 2018), or focus on other domains (e.g., Wikipedia edits; Wulczyn et al., 2017). In addition to DWMW17 and FDCL18, published Twitter corpora include Golbeck et al. (2017), which uses a somewhat restrictive definition of abuse, and Ribeiro et al. (2018), which is focused on network features, rather than text. Past work on bias in hate speech datasets has exclusively focused on finding and removing bias against explicit identity mentions (e.g., woman, atheist, queer; Park and Fung, 2017; Dixon et al., 2018). In contrast, our work shows how insensitivity to dialect can lead to discrimination against minorities, even without explicit identity mentions. 6 Conclusion We analyze racial bias in widely-used corpora of annotated toxic language, establishing correlations between annotations of offensiveness and the African American English (AAE) dialect. We show that models trained on these corpora propagate these biases, as AAE tweets are twice as likely to be labelled offensive compared to others. Finally, we introduce dialect and race priming, two ways to reduce annotator bias by highlighting the dialect of a tweet in the data annotation, and show that it significantly decreases the likelihood of AAE tweets being labelled as offensive. We find strong evidence that extra attention should be paid to the confounding effects of dialect so as to avoid unintended racial biases in hate speech detection. Acknowledgments The authors thank Dan Jurafsky, Emily Bender, Emily Gade, Tal August, Wesley McClean, Victor Zhong, and Laura Vianna, as well as anonymous reviewers, for helpful feedback. This work was in part supported by NSF grant IIS-1714566. References Betty van Aken, Julian Risch, Ralf Krestel, and Alexander L¨oser. 2018. Challenges for toxic comment classification: An in-depth error analysis. CoRR, abs/1809.07572. Wafa Alorainy, Pete Burnap, Han Liu, and Matthew Williams. 2018. Cyber hate classification: ’othering’ language and paragraph embedding. CoRR, abs/1801.07495. Monica Anderson, Skye Toor, Lee Rainie, and Aaron Smith. 2018. Activism in the social media ages. http://www.pewinternet.org/ 2018/07/11/activism-in-the-socialmedia-age/. Accessed: 2019-03-01. Su Lin Blodgett, Lisa Green, and Brendan O’Connor. 2016. Demographic dialectal variation in social media: A case study of African-American english. In EMNLP. Pete Burnap and Matthew L. Williams. 2015. Cyber hate speech on Twitter: An application of machine classification and statistical modeling for policy and decision making. Policy & Internet, 7:223–242. Anna Chung. 2019. How automated tools discriminate against black language. https:// onezero.medium.com/how-automatedtools-discriminate-against-blacklanguage-2ac8eab8d6db. Accessed: 201903-02. Jamie Cleland. 2014. Racism, football fans, and online message boards: How social media has added a new dimension to racist discourse in English football. J. Sport Soc. Issues, 38(5):415–431. Thomas Davidson, Dana Warmsley, Michael W. Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In ICWSM. Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigating unintended bias in text classification. In Proceedings of Conference on AI, Ethics, and Society. Walter F. Edwards. 2004. African American Vernacular English: phonology. 
In A Handbook of Varieties of English: Morphology and Syntax. Sarah Florini. 2014. Tweets, tweeps, and signifyin’: Communication and cultural performance on “Black Twitter”. Television & New Media, 15(3):223–237. Antigoni-Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. In ICWSM. 1673 Deen Freelon, Charlton D. McIlwain, and Meredith D. Clark. 2016. Beyond the hashtags. http://cmsimpact.org/wpcontent/uploads/2016/03/beyond_ the_hashtags_2016.pdf. Accessed: 201903-01. Jennifer Golbeck, Zahra Ashktorab, Rashad O. Banjo, Alexandra Berlinger, Siddharth Bhagwan, Cody Buntain, Paul Cheakalos, Alicia A. Geller, Quint Gergory, Rajesh Kumar Gnanasekaran, Raja Rajan Gunasekaran, Kelly M. Hoffman, Jenny Hottle, Vichita Jienjitlert, Shivika Khare, Ryan Lau, Marianna J. Martindale, Shalmali Naik, Heather L. Nixon, Piyush Ramachandran, Kristine M. Rogers, Lisa Rogers, Meghna Sardana Sarin, Gaurav Shahane, Jayanee Thanki, Priyanka Vengataraman, Zijian Wan, and Derek Michael Wu. 2017. A large labeled corpus for online harassment research. In WebSci, pages 229–233. ACM. Lisa Green. 2002. African American English: A Linguistic Introduction, 8.3.2002 edition edition. Cambridge University Press. Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of opportunity in supervised learning. In NeurIPS. Raghav Kapoor, Yaman Kumar, Kshitij Rajput, Rajiv Ratn Shah, Ponnurangam Kumaraguru, and Roger Zimmermann. 2018. Mind your language: Abuse and offense detection for code-switched languages. CoRR, abs/1809.08652. Filip Klubika and Raquel Fernandez. 2018. Examining a hate speech corpus for hate speech detection and popularity prediction. In LREC. Irene Kwok and Yuzhou Wang. 2013. Locate the hate: Detecting tweets against blacks. In AAAI. Younghun Lee, Seunghyun Yoon, and Kyomin Jung. 2018. Comparative studies of detecting abusive language on twitter. CoRR, abs/1808.10245. Gordon B. Moskowitz and Peizhong Li. 2011. Egalitarian goals trigger stereotype inhibition: A proactive form of stereotype control. J. Exp. Soc. Psychol., 47(1):103–116. Paul Mozur. 2018. A genocide incited on Facebook, with posts from Myanmar’s military. https://www.nytimes.com/2018/10/ 15/technology/myanmar-facebookgenocide.html. Accessed: 2018-12-6. Gwenn Schurgin O’Keeffe, Kathleen Clarke-Pearson, and Council on Communications and Media. 2011. The impact of social media on children, adolescents, and families. Pediatrics, 127(4):800–804. Ji Ho Park and Pascale Fung. 2017. One-step and twostep classification for abusive language detection on Twitter. In Proceedings of the Workshop on Abusive Language Online. Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Reducing gender bias in abusive language detection. In EMNLP. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP. E. Ashby Plant and Patricia G. Devine. 1998. Internal and external motivation to respond without prejudice. J. Pers. Soc. Psychol., 75(3):811–832. Daniel Preot¸iuc-Pietro and Lyle Ungar. 2018. Userlevel race and ethnicity predictors from Twitter text. In COLING. Jacquelyn Rahman. 2012. The N word: Its history and use in the African American community. Journal of English Linguistics, 40(2):137–171. Manoel Horta Ribeiro, Pedro H. Calais, Yuri A. Santos, Virg´ılio A. F. Almeida, and Wagner Meira Jr. 2018. 
Characterizing and detecting hateful users on Twitter. In ICWSM. Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language processing. In Proceedings of the Workshop on NLP for Social Media. Maya Sen and Omar Wasow. 2016. Race as a bundle of sticks: Designs that estimate effects of seemingly immutable characteristics. Annual Review of Political Science, 19. Arthur K Spears. 1998. African-American language use: Ideology and so-called obscenity. In Salikoko S Mufwene, John R Rickford, Guy Bailey, and John Baugh, editors, African-American English: Structure, History and Use, pages 226–250. Routledge New York. Luiz Val´erio P Trindade. 2018. On the frontline: The rise of hate speech and racism on social media. https://discoversociety.org/ 2018/09/04/on-the-frontline-therise-of-hate-speech-and-racism-onsocial-media/. Accessed: 2018-12-6. Yequan Wang, Minlie Huang, xiaoyan zhu, and Li Zhao. 2016. Attention-based LSTM for aspectlevel sentiment classification. In EMNLP. Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In NAACL Student Research Workshop. Zeerak Waseem, James Thorne, and Joachim Bingel. 2018. Bridging the gaps: Multi task learning for domain transfer of hate speech detection. In Jennifer Golbeck, editor, Online Harassment, pages 29–55. Springer International Publishing, Cham. Apryl Williams and Doris Domoszlai. 2013. BlackTwitter: a networked cultural identity. Harmony Institute. 1674 Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In WWW. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy. 2016. Hierarchical attention networks for document classification. In NAACL. Danyaal Yasin. 2018. Black and banned: Who is free speech for? https: //www.indexoncensorship.org/2018/ 09/black-and-banned-who-is-freespeech-for/. Accessed: 2018-12-6. Ziqi Zhang, David Robinson, and Jonathan A. Tepper. 2018. Detecting hate speech on Twitter using a convolution-GRU based deep neural network. In Proceedings of ESWC. 1675 (FDCL18 – abusive) (FDCL18 – hateful) (DWMW17 – offensive) (DWMW17 – hate speech) Figure 4: Feature weights learned by l2-regularized multiclass logistic regression models with unigram features, plotted against pAAE for each term, based on Blodgett et al. (2016). Top: weights for predicting abusive (left) and hateful (right) from a model trained on FDCL18. Bottom: weights for predicting offensive (left) and hate speech (right) from a model trained on DWMW17. Labels are shown for the most heavily-weighted terms, with label size proportional to the log count of the term in validation data. Note: “c*nt”, “n*gger,” “f*ggot,” and their variations are considered sexist, racist, and homophobic slurs, respectively, and are predictive of hate speech DWMW17. A Appendix We present further evidence of racial bias in hate speech detection in this appendix. Disclaimer: due to the nature of this research, figures and tables contain potentially offensive or upsetting terms (e.g. racist, sexist, or homophobic slurs). We do not censor these terms, as they are illustrative of important features in the datasets. A.1 Lexical Exploration of Data Bias To better understand the correlations between inferred dialect and the annotated hate speech categories (abusive, offensive, etc.) we use simple linear models to look for influential terms. 
Specifically, we train l2-regularized multiclass logistic regression classifiers operating on unigram features for each of DWMW17 and FDCL18 (tuning the regularization strength on validation data). We then use the Blodgett et al. (2016) model to infer pAAE for each individual vocabulary term in isolation. While this does not completely explain the correlations observed in section §3.1, it does allow us to identify individual words that are both strongly associated with AAE, and highly predictive of particular categories. Figure 4 shows the feature weights and pAAE for each word in the models for FDCL18 (top) and DWMW17 (bottom), with the most highly weighted terms identified on the plots. The size of words indicates how common they are (proportional to the log of the number of times they appear in the corpus). These results reveal important limitations of these datasets, and illustrate the potential for discriminatory impact of any simple models trained on this data. First, and most obviously, the most highly weighted unigrams for predicting “hateful” in FDCL18 are “n*gga” and “n*ggas”, which are 1676 on DEMOGRAPHIC16 on USERLEVELRACE18 WH16 % false identification Group Acc. Racism Sexism None AAE 83.8 0.9 2.8 32.5 White 83.5 3.2 2.7 34.6 Overall 84.1 2.7 3.0 35.9 0 25 50 75 100 AAE White Overall Dialect 81.1 17.5 90.5 8.2 88.8 9.9 None Sexism Racism 0 25 50 75 100 AA White Overall Self-reported race 88.9 10.0 90.5 8.4 90.3 8.6 None Sexism Racism Figure 5: Left: classification accuracy and per-class rates of false positives (FP) on test data for the model trained on WH16. Middle and right: average probability mass of toxicity classes in DEMOGRAPHIC16 and USERLEVELRACE18, respectively, as given by the WH16 classifier. As in Figure 2, proportions are shown for AAE, Whitealigned English, and overall (all tweets) for DEMOGRAPHIC16, and for self-identified White authors, African American authors (AA), and overall for USERLEVELRACE18. strongly associated with AAE (and their offensiveness depends on speaker and context; Spears, 1998). Because these terms are both frequent and highly weighted, any simple model trained on this data would indiscriminately label large numbers of tweets containing either of these terms as “hateful”. By contrast, the terms that are highly predictive of “hate speech” in DWMW17 (i.e., slurs) partly reflect the HateBase lexicon used in constructing this dataset, and the resulting emphasis is different. (We also see artefacts of the dataset construction in the negative weights placed on “charlie”, “bird”, and “yankees” — terms which occur in HateBase, but have harmless primary meanings.) To verify that no single term is responsible for the correlations reported in section §3.1, we consider each word in the vocabulary in turn, and compute correlations excluding tweets containing that term. The results of this analysis (not shown) find that almost all of the correlations we observe are robust. For example, the correlation between pAAE and “abusive” in FDCL18 increases the most if we drop tweets containing “fucking” (highly positively weighted, but non-AAE aligned), and decreases slightly if we drop terms like “ass” or “bitch”. The one exception is the correlation between “hateful” and pAAE in FDCL18: if we exclude tweets which contain “n*gga” or “n*ggas”, the correlation drops to r=0.047. However, this also causes the correlation between pAAE and “abusive” to increase to r=0.376. 
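The unigram analysis in §A.1 can be sketched as follows: fit an l2-regularized multiclass logistic regression on unigram counts, then pair each vocabulary item's per-class weight with its AAE probability. The dialect scorer is treated here as a black-box callable wrapping the Blodgett et al. (2016) model, and the regularization strength is a placeholder rather than the value tuned on validation data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def unigram_weights_vs_dialect(tweets, labels, p_aae_of_term, C=1.0):
    """Return one record per vocabulary item with its p_AAE and the learned
    per-class feature weights (the quantities plotted in Figure 4)."""
    vectorizer = CountVectorizer(lowercase=True)
    X = vectorizer.fit_transform(tweets)
    # Assumes a multiclass setting (3+ labels), so coef_ has shape (n_classes, n_features).
    clf = LogisticRegression(penalty="l2", C=C, max_iter=1000).fit(X, labels)

    records = []
    for j, term in enumerate(vectorizer.get_feature_names_out()):
        weights = {cls: clf.coef_[i, j] for i, cls in enumerate(clf.classes_)}
        records.append({"term": term,
                        "p_aae": p_aae_of_term(term),  # dialect probability of the term in isolation
                        **weights})
    return records
```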
A.2 Experimental Details for Classification For each dataset, we randomly split the data into train/dev./test sets (73/12/15%), and perform early stopping when classification accuracy on dev. data stops increasing. For DWMW17, which has multicategory count AAE corr. racism 1,976 −0.117 sexism 3,430 0.168 none 11,501 −0.064 total 16,907 Table 2: Data statistics in WH16, as well as the Pearson r correlations with the labels and inferred AAE dialect. All correlations are p ≪0.001. ple annotations per instance, we use the majority class as the label, dropping instances that are tied. For both datasets, we preprocess the text using an adapted version of the script for Twitter GloVe vectors.12 In our experiments, we set H = 64, and use a vocabulary size of |V | = 19k and |V | = 74k for DWMW17 and FDCL18, respectively, and initialize the embedding layer with 300-dimensional GloVe vectors trained on 840 billion tokens. We experimented with using ELMo embeddings, but found that they did not boost performance for this task. We optimize these models using Adam with a learning rate of 0.001, and a batch size of 64. A.3 Bias in Waseem and Hovy (2016) We replicate our analyses in §3 on the widely used dataset by Waseem and Hovy (2016, henceforth, WH16), which categorizes tweets in three hate speech categories: racist, sexist, or none, shown in Table 2, along with their correlations with AAE. This dataset suffers from severe sampling bias that limit the conclusions to be drawn from this data: 70% of sexist tweets were written by two users, and 99% of racist tweets were written by a single user (Schmidt and Wiegand, 2017; Klubika and Fernandez, 2018). 12https://nlp.stanford.edu/projects/ glove/preprocess-twitter.rb 1677 Full Instructions (Expand/Collapse) You will read a tweet, and describe whether it could be considered toxic/disrespectful, to you or to anyone. Note: we will assume that MTurk workers only have good intentions when annotating these posts. A note on race/ethnicity of the tweet author We also provide an estimate of the tweet dialect, as determined by an AI system. Previous research has showed that dialects of English are strongly associated to a speaker's racial or ethnic identity. Additionally, certain words are usually less toxic when used by a minority (e.g., the word "n*gga" or the suffix "-ass" are considered harmless in African American English), therefore it's useful to know the dialect a tweet is in before labelling it for toxic content. Our AI system detects the following dialects: General American English (gen Eng): associated with generic newscaster English. African-American English (Afr-Am Eng): dialect spoken usually by African-American or Black folks. Latino American English (Lat Eng): dialect spoken usually by Latino/a folks both in New York and California, Texas, Chicago, etc. (dialect priming) Instructions Read a potentially toxic post from the internet and tell us why it's toxic (this should take approx. 5 minutes). Note: You can complete as many HIT's in this batch as you want! But if your responses tend to be very different from what we're looking for, we might put a quota on the number of HIT's you can do in future batches. Also note: this is a pilot task, more HITs will be available in the future. Participation restriction: providers/turkers for this task cannot currently be employed by or a student at the University of Washington. 
Full Instructions (Expand/Collapse) You will read a tweet, and describe whether it could be considered toxic/disrespectful, to you or to anyone. Note: we will assume that MTurk workers only have good intentions when annotating these posts. A note on race/ethnicity of the tweet author We also provide an estimate of the Twitter user's race or ethnicity, as inferred by our AI system. Note that certain words are usually less toxic when used by a minority (e.g., the word "n*gga" or the suffix "-ass" are considered harmless when spoken by Black folks), therefore it's useful to know the identity of a Tweeter before labelling it for toxic content. Annotation instructions 1.a) Tell us whether this tweet seems toxic/hateful/disrespectful to you. Our purpose is to understand how disrespect/offense can show up in language, we are not making statements about the actual content of the posts. 1.b) Considering a wide set of perspectives, tell us whether this could be considered toxic/hateful/disrespectful to others. Try to answer this questions while considering a broad set of people from different backgrounds, not just your own. 1.c) Tell us whether the tweet was intentionally offensive or not. It can be hard to infer the intent behind a statement, but sometimes posts are clearly offensive jokes, insults, snobism, condescension, profanity, back-handed compliments, name calling, bullying, intimidation, or aggression. 2) If the post contains sexual content (explicitly or innuendo), explain which part. Sexual content can be used in disrespectful language, either overtly or hidden. Use the first text box to describe which parts of the post contain euphemism, double entendre or explicit sexual content. Then, use the second text box to explain why you answered this; try to explain what the phrase means, what it refers to, what the double-entendre is about, etc. 3) Indicate your gender, age, race, political leaning, and whether you identify as a minority (this will remain confidential). Your own personal background and experiences influence what you think of as disrespectful or offensive. We collect this information to account for all types of backgrounds that MTurkers come from in our research. If you answered this question once, you can skip it in subsequent HITs. Background on our research project At the University of Washington, we're passionate about understanding how potentially toxic or disrespectful language or stereotypes can be used against certain demographics/groups of people (e.g. racism, sexism, etc.). Although there is no direct benefit to you for participating, we very much appreciate your help in identifying and explaining such language/stereotypes, since this is something computational models have no clue about. We do not agree with any of the content/stereotypes presented to you, but it's important that we gather these annotations for research purposes. Data collection & sharing We will not ask you for your name, and the data collected in this study will be made unidentifiable to the best of our extent. We will securely store the data on our servers and only share with qualified researchers (e.g. who want to further the study of hate speech detection). If you later decide that you do not want your responses included in this study, please email so we can exclude your work. 
If you have questions about your rights as a research participant, or wish to obtain information, ask questions or discuss any concerns about this study with someone other than the researcher(s), please contact the University of Washington Human Subjects Division at 206-543-0098 (for international calls include the US Calling Code: +1-206-543-0098). Content Warning: posts were found on the (uncensored) internet; while it's crucial for us to annotate them, we do not endorse any of the stereotypes or offensive/immoral/rude material. You may find some of the content upsetting. If you have concerns, questions, or strong negative reactions to some of the content, please either email us (Maarten Sap at [email protected], or Professor Yejin Choi at [email protected]) or reach out if in crisis. Examples [-] less examples Sentence Race/Ethnicity Toxic Intentional Sex You only got the job because you're a woman. White Yes Yes No The movie with the all-muslim cast was a box office bomb. White Yes Probably No I got my black ass handed to me during this basketball game. Black No No No A white woman called police on black people barbecuing. White No No (statement) No This nigga tried to call me three times during my shift smh Black No No No You are a MORON. White Yes Yes No Cause of dem stupid traffic lights I almost miss the beginning of my shift lololol Latino/Latina Yes Yes No Wh t d t ll t i d f i i t Whit Y Y N (race priming) Figure 6: Additional instructions shown to workers in the dialect and race priming. In the dialect condition, we provide links to the dialect Wikipedia pages. In Figure 5 (left), we show how models trained on this dataset have slightly higher false positive rates of sexism on AAE tweets, and of the “none” label for White tweets compared to AAE tweets. When predicting on our reference corpora (Figure 5, middle and right), we see AAE tweets (or tweets by African Americans) are labelled as sexist more than White-aligned tweets or tweets by White users. Again, due to the sampling issues, these results should be interpreted cautiously. A.4 Toxicity and AAE in the PerspectiveAPI We compare the toxicity detection rates from our trained models to those of PerspectiveAPI,13 a Jigsaw/Alphabet initiative to detect hate speech online. We obtain TOXICITY scores for all tweets in DWMW17 and FDCL18, as well as for 100K random tweets from DEMOGRAPHIC16 and USERLEVELRACE18. We show correlations (Pearson r) between dialects/groups in our datasets and the Perspective TOXICITY scores. All correlations are significant (p ≪0.001, Holm-corrected for multiple comparisons) which indicates potential racial bias for all datasets. dataset dialect/group TOXICITY corr. DWMW17 White −0.320 AAE 0.310 FDCL18 White −0.340 AAE 0.453 DEMOGRAPHIC16 White −0.096 AAE 0.056 USERLEVELRACE18 White −0.046 AA 0.042 13The API (http://perspectiveapi.com) was accessed in December 2018 A.5 Dialect Priming Experimental Details We collected annotations from 110 (76% White), 143 (77% White), and 81 (72% White) workers in the control, dialect, and race priming conditions, respectively. Figure 6 shows the instruction snippet related to dialect and race shown to workers in the two treatment conditions. Additionally, Figure 7 shows the annotation interface, with (a) and without priming (b,c). 1678 A Twitter user tweeted: I swear I saw him yesterday. 1.a) Does this post seem offensive/disrespectful to you? Yes Maybe No Post doesn't make sense/is just a link 1.b) Could this post be considered offensive/disrespectful to anyone? 
A.5 Dialect Priming Experimental Details
We collected annotations from 110 (76% White), 143 (77% White), and 81 (72% White) workers in the control, dialect, and race priming conditions, respectively. Figure 6 shows the instruction snippet related to dialect and race shown to workers in the two treatment conditions. Additionally, Figure 7 shows the annotation interface, without priming (a) and with priming (b, c).

Figure 7: Interface for the controlled experiment. (a) shows the control condition along with the offensiveness questions, 1.a) "Does this post seem offensive/disrespectful to you?" and 1.b) "Could this post be considered offensive/disrespectful to anyone?", each answered with Yes / Maybe / No (1.a also offers "Post doesn't make sense/is just a link"), for the tweet "I swear I saw him yesterday." (b) and (c) show the changes to the treatment interface in the dialect and race priming conditions for the tweet "I swear I saw his ass yesterday.": in (b) workers are told that our AI system thinks the tweet is in African American English, and in (c) that the Twitter user is likely Black/African American; both conditions let workers flag that the AI prediction seems wrong.
2019
163
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679–1684 Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

Evaluating Gender Bias in Machine Translation

Gabriel Stanovsky1,2, Noah A. Smith1,2, and Luke Zettlemoyer1
1 Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, USA
2 Allen Institute for Artificial Intelligence, Seattle, USA
{gabis,nasmith,lsz}@cs.washington.edu

Abstract

We present the first challenge set and evaluation protocol for the analysis of gender bias in machine translation (MT). Our approach uses two recent coreference resolution datasets composed of English sentences which cast participants into non-stereotypical gender roles (e.g., "The doctor asked the nurse to help her in the operation"). We devise an automatic gender bias evaluation method for eight target languages with grammatical gender, based on morphological analysis (e.g., the use of female inflection for the word "doctor"). Our analyses show that four popular industrial MT systems and two recent state-of-the-art academic MT models are significantly prone to gender-biased translation errors for all tested target languages. Our data and code are publicly available at https://github.com/gabrielStanovsky/mt_gender.

1 Introduction

Learned models exhibit social bias when their training data encode stereotypes not relevant for the task, but the correlations are picked up anyway. Notable examples include gender biases in visual SRL (cooking is stereotypically done by women, construction workers are stereotypically men; Zhao et al., 2017), lexical semantics ("man is to computer programmer as woman is to homemaker"; Bolukbasi et al., 2016), and natural language inference (associating women with gossiping and men with guitars; Rudinger et al., 2017).

In this work, we conduct the first large-scale multilingual evaluation of gender-bias in machine translation (MT), following recent small-scale qualitative studies which observed that online MT services, such as Google Translate or Microsoft Translator, also exhibit biases, e.g., translating nurses as females and programmers as males, regardless of context (Alvarez-Melis and Jaakkola, 2017; Font and Costa-Jussà, 2019). Google Translate recently tried to mitigate these biases by allowing users to sometimes choose between gendered translations (Kuczmarski, 2018).

The doctor asked the nurse to help her in the procedure
El doctor le pidió a la enfermera que le ayudara con el procedimiento
Figure 1: An example of gender bias in machine translation from English (top) to Spanish (bottom). In the English source sentence, the nurse's gender is unknown, while the coreference link with "her" identifies the "doctor" as a female. On the other hand, the Spanish target sentence uses morphological features for gender: "el doctor" (male), versus "la enfermera" (female). Aligning between source and target sentences reveals that a stereotypical assignment of gender roles changed the meaning of the translated sentence by changing the doctor's gender.

As shown in Figure 1, we use data introduced by two recent coreference gender-bias studies: the Winogender (Rudinger et al., 2018), and the WinoBias (Zhao et al., 2018) datasets.
Following the Winograd schema (Levesque, 2011), each instance in these datasets is an English sentence which describes a scenario with human entities, who are identified by their role (e.g., "the doctor" and "the nurse" in Figure 1), and a pronoun ("her" in the example), which needs to be correctly resolved to one of the entities ("the doctor" in this case). Rudinger et al. (2018) and Zhao et al. (2018) found that while human agreement on the task was high (roughly 95%), coreference resolution models often ignore context and make socially biased predictions, e.g., associating the feminine pronoun "her" with the stereotypically female "nurse."

We observe that for many target languages, a faithful translation requires a similar form of (at least implicit) gender identification. In addition, in the many languages which associate between biological and grammatical gender (e.g., most Romance, Germanic, Slavic, and Semitic languages; Craig, 1986; Mucchi-Faina, 2005; Corbett, 2007), the gender of an animate object can be identified via morphological markers. For instance, when translating our running example in Figure 1 to Spanish, a valid translation may be: "La doctora le pidió a la enfermera que le ayudara con el procedimiento," which indicates that the doctor is a woman, by using a feminine suffix inflection ("doctora") and the feminine definite gendered article ("la"). However, a biased translation system may ignore the given context and stereotypically translate the doctor as male, as shown at the bottom of the figure.

Following these observations, we design a challenge set approach for evaluating gender bias in MT using a concatenation of Winogender and WinoBias. We devise an automatic translation evaluation method for eight diverse target languages, without requiring additional gold translations, relying instead on automatic measures for alignment and morphological analysis (Section 2). We find that four widely used commercial MT systems and two recent state-of-the-art academic models are significantly gender-biased on all tested languages (Section 3). Our method and benchmarks are publicly available, and are easily extensible with more languages and MT models.

2 Challenge Set for Gender Bias in MT

We compose a challenge set for gender bias in MT (which we dub "WinoMT") by concatenating the Winogender and WinoBias coreference test sets. Overall, WinoMT contains 3,888 instances, and is equally balanced between male and female genders, as well as between stereotypical and non-stereotypical gender-role assignments (e.g., a female doctor versus a female nurse). Additional dataset statistics are presented in Table 1.

        | Winogender | WinoBias | WinoMT
Male    | 240 | 1582 | 1826
Female  | 240 | 1586 | 1822
Neutral | 240 | 0 | 240
Total   | 720 | 3168 | 3888
Table 1: The coreference test sets and resulting WinoMT corpus statistics (in number of instances).

We use WinoMT to estimate the gender bias of an MT model, M, in target language L by performing the following steps (exemplified in Figure 1):
(1) Translate all of the sentences in WinoMT into L using M, thus forming a bilingual corpus of English and the target language L.
(2) Align between the source and target translations, using fast_align (Dyer et al., 2013), trained on the automatic translations from step (1). We then map the English entity annotated in the coreference datasets to its translation (e.g., align between "the doctor" and "el doctor" in Figure 1).
(3) Finally, we extract the target-side entity's gender using simple heuristics over language-specific morphological analysis, which we perform using off-the-shelf tools for each target language, as discussed in the following section.

This process extracts the translated genders, according to M, for all of the entities in WinoMT, which we can then evaluate against the gold annotations provided by the original English dataset. This process can introduce noise into our evaluation in steps (2) and (3), via wrong alignments or erroneous morphological analysis. In Section 3, we will present a human evaluation showing these errors are infrequent.
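As a rough illustration of steps (2) and (3), the sketch below assumes the system translations are already available and that fast_align has produced "i-j" word alignments; spaCy's Spanish model stands in for the per-language morphological analyzers described in Section 3.1. The model name, indices, and alignment string are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of mapping an annotated English entity to its translation and
# reading off its grammatical gender from morphological features.
import spacy

nlp_es = spacy.load("es_core_news_sm")  # assumed choice of target-language model

def parse_alignment(line):
    """Turn a fast_align line such as '0-0 1-1 4-6' into a source->target index map."""
    mapping = {}
    for pair in line.split():
        s, t = pair.split("-")
        mapping.setdefault(int(s), []).append(int(t))
    return mapping

def extract_gender(target_sentence, entity_index, alignment_line):
    """Return 'male'/'female'/'unknown' for the target-side token(s) aligned to the
    English entity at position entity_index. Assumes spaCy's tokenization matches
    the tokenization used when producing the alignments."""
    alignment = parse_alignment(alignment_line)
    target_doc = nlp_es(target_sentence)
    for t_idx in alignment.get(entity_index, []):
        genders = target_doc[t_idx].morph.get("Gender")
        if genders:
            return "male" if genders[0] == "Masc" else "female"
    return "unknown"

# Running example of Figure 1: "doctor" is token 1 of the English source.
tgt = "El doctor le pidió a la enfermera que le ayudara con el procedimiento"
print(extract_gender(tgt, 1, "0-0 1-1 4-6"))  # illustrative alignment string
```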
3 Evaluation

In this section, we briefly describe the MT systems and the target languages we use, our main results, and their human validation.

3.1 Experimental Setup

MT systems. We test six widely used MT models, representing the state of the art in both commercial and academic research: (1) Google Translate,1 (2) Microsoft Translator,2 (3) Amazon Translate,3 (4) SYSTRAN,4 (5) the model of Ott et al. (2018), which recently achieved the best performance on English-to-French translation on the WMT'14 test set, and (6) the model of Edunov et al. (2018), the WMT'18 winner on English-to-German translation. We query the online API for the first four commercial MT systems, while for the latter two academic models we use the pretrained models provided by the Fairseq toolkit.5

1 https://translate.google.com
2 https://www.bing.com/translator
3 https://aws.amazon.com/translate
4 http://www.systransoft.com
5 https://github.com/pytorch/fairseq

         | Google Translate   | Microsoft Translator | Amazon Translate∗  | SYSTRAN
Language | Acc / ∆G / ∆S      | Acc / ∆G / ∆S        | Acc / ∆G / ∆S      | Acc / ∆G / ∆S
ES       | 53.1 / 23.4 / 21.3 | 47.3 / 36.8 / 23.2   | 59.4 / 15.4 / 22.3 | 45.6 / 46.3 / 15.0
FR       | 63.6 / 6.4 / 26.7  | 44.7 / 36.4 / 29.7   | 55.2 / 17.7 / 24.9 | 45.0 / 44.0 / 9.4
IT       | 39.6 / 32.9 / 21.5 | 39.8 / 39.8 / 17.0   | 42.4 / 27.8 / 18.5 | 38.9 / 47.5 / 9.4
RU       | 37.7 / 36.8 / 11.4 | 36.8 / 42.1 / 8.5    | 39.7 / 34.7 / 9.2  | 37.3 / 44.1 / 9.3
UK       | 38.4 / 43.6 / 10.8 | 41.3 / 46.9 / 11.8   | – / – / –          | 28.9 / 22.4 / 12.9
HE       | 53.7 / 7.9 / 37.8  | 48.1 / 14.9 / 32.9   | 50.5 / 10.3 / 47.3 | 46.6 / 20.5 / 24.5
AR       | 48.5 / 43.7 / 16.1 | 47.3 / 48.3 / 13.4   | 49.8 / 38.5 / 19.0 | 47.0 / 49.4 / 5.3
DE       | 59.4 / 12.5 / 12.5 | 74.1 / 0.0 / 30.2    | 62.4 / 12.0 / 16.7 | 48.6 / 34.5 / 10.3
Table 2: Performance of commercial MT systems on the WinoMT corpus on all tested languages, categorized by their family: Spanish, French, Italian, Russian, Ukrainian, Hebrew, Arabic, and German. Acc indicates overall gender accuracy (% of instances the translation had the correct gender), ∆G denotes the difference in performance (F1 score) between masculine and feminine scores, and ∆S is the difference in performance (F1 score) between pro-stereotypical and anti-stereotypical gender role assignments (higher numbers in the two latter metrics indicate stronger biases). Numbers in bold indicate best accuracy for the language across MT systems (row), and underlined numbers indicate best accuracy for the MT system across languages (column). ∗Amazon Translate does not have a trained model for English to Ukrainian.

                         | Acc  | ∆G  | ∆S
FR (Ott et al., 2018)    | 49.4 | 2.6 | 16.1
DE (Edunov et al., 2018) | 52.5 | 7.3 | 8.4
Table 3: Performance of recent state-of-the-art academic translation models from English to French and German. Metrics are the same as those in Table 2.
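The two academic systems above ship as pretrained Fairseq models; a minimal sketch of querying one of them via torch.hub is shown below. The hub identifiers and tokenizer/BPE options follow Fairseq's public examples and are our assumption, not a detail specified in the paper.

```python
# Sketch of loading a pretrained Fairseq translation model through torch.hub.
import torch

# Ott et al. (2018), English-to-French; the WMT'18 English-to-German model of
# Edunov et al. (2018) is assumed to be exposed analogously (e.g. "transformer.wmt18.en-de").
en2fr = torch.hub.load("pytorch/fairseq", "transformer.wmt14.en-fr",
                       tokenizer="moses", bpe="subword_nmt")
en2fr.eval()

print(en2fr.translate("The doctor asked the nurse to help her in the procedure."))
```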
Target languages and morphological analysis. We selected a set of eight languages with grammatical gender which exhibit a wide range of other linguistic properties (e.g., in terms of alphabet, word order, or grammar), while still allowing for highly accurate automatic morphological analysis. These languages belong to four different families: (1) Romance languages: Spanish, French, and Italian, all of which have gendered noun-determiner agreement and spaCy morphological analysis support (Honnibal and Montani, 2017). (2) Slavic languages (Cyrillic alphabet): Russian and Ukrainian, for which we use the morphological analyzer developed by Korobov (2015). (3) Semitic languages: Hebrew and Arabic, each with a unique alphabet. For Hebrew, we use the analyzer developed by Adler and Elhadad (2006), while gender inflection in Arabic can be easily identified via the ta marbuta character, which uniquely indicates feminine inflection. (4) Germanic languages: German, for which we use the morphological analyzer developed by Altinok (2018).

3.2 Results

Our main findings are presented in Tables 2 and 3. For each tested MT system and target language we compute three metrics with respect to their ability to convey the correct gender in the target language. Ultimately, our analyses indicate that all tested MT systems are indeed gender biased.

First, the overall system Accuracy is calculated by the percentage of instances in which the translation preserved the gender of the entity from the original English sentence. We find that most tested systems across eight tested languages perform quite poorly on this metric. The best performing model on each language often does not do much better than a random guess for the correct inflection. An exception to this rule is the translation accuracies on German, where three out of four systems achieve their best performance. This may be explained by German's similarity to the English source language (Hawkins, 2015).

In Table 2, ∆G denotes the difference in performance (F1 score) between male and female translations. Interestingly, all systems, except Microsoft Translator on German, perform significantly better on male roles, which may stem from these being more frequent in the training set.

Perhaps most tellingly, ∆S measures the difference in performance (F1 score) between stereotypical and non-stereotypical gender role assignments, as defined by Zhao et al. (2018) who use statistics provided by the US Department of Labor.6 This metric shows that all tested systems have a significant and consistently better performance when presented with pro-stereotypical assignments (e.g., a female nurse), while their performance deteriorates when translating anti-stereotypical roles (e.g., a male receptionist). For instance, Figure 2 depicts Google Translate absolute accuracies on stereotypical and non-stereotypical gender roles across all tested languages. Other tested systems show similar trends.

Figure 2: Google Translate's performance on gender translation on our tested languages (stereotypical vs. non-stereotypical accuracy, in %: ES 67/46, FR 80/54, IT 52/30, RU 44/33, UK 46/35, HE 76/38, AR 60/44, DE 69/57). The performance on the stereotypical portion of WinoMT is consistently better than that on the non-stereotypical portion. The other MT systems we tested display similar trends.

   | Original | +Adj | ∆
ES | 53.1 | 63.5 | +10.4
RU | 37.7 | 48.9 | +11.2
UK | 38.4 | 42.9 | +4.5
Table 4: Performance of Google Translate on Spanish, Russian, and Ukrainian gender prediction accuracy (% correct) on the original WinoMT corpus, versus a modified version of the dataset where we add stereotypical gender adjectives (see Section 3.3).
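The three metrics just described can be sketched as follows, given per-instance records of gold gender, predicted (translated) gender, and stereotypicality. Reading ∆G as an F1 gap and ∆S as an accuracy gap between subsets is our interpretation of the paper, not the authors' released code.

```python
# Minimal sketch of Acc, Delta-G and Delta-S over WinoMT predictions.
from sklearn.metrics import f1_score

def winomt_metrics(records):
    """records: list of dicts with keys 'gold', 'pred' (strings such as 'male',
    'female', 'neutral', 'unknown') and 'stereotypical' (bool)."""
    gold = [r["gold"] for r in records]
    pred = [r["pred"] for r in records]

    # Overall accuracy: translated gender matches the gold gender.
    acc = sum(g == p for g, p in zip(gold, pred)) / len(records)

    # Delta-G: F1 of the male class minus F1 of the female class.
    f1_male = f1_score(gold, pred, labels=["male"], average="macro", zero_division=0)
    f1_female = f1_score(gold, pred, labels=["female"], average="macro", zero_division=0)

    # Delta-S: gap between pro-stereotypical and anti-stereotypical instances
    # (the paper reports an F1 gap; accuracy keeps the sketch short).
    def subset_acc(flag):
        subset = [r for r in records if r["stereotypical"] == flag]
        return sum(r["gold"] == r["pred"] for r in subset) / max(len(subset), 1)

    return {"Acc": 100 * acc,
            "Delta_G": 100 * (f1_male - f1_female),
            "Delta_S": 100 * (subset_acc(True) - subset_acc(False))}
```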
3.3 Fighting Bias with Bias Finally, we tested whether we can affect the translations by automatically creating a version of WinoMT with the adjectives “handsome” and “pretty” prepended to male and female entities, respectively. For example, the sentence in Figure 1 will be converted to: “The pretty doctor asked the nurse to help her in the operation”. We are interested in evaluating whether this “corrects” the profession bias by mixing signals, e.g., while “doc6https://www.bls.gov/cps/cpsaat11.htm tor” biases towards a male translation, “pretty” tugs the translation towards a female inflection. Our results show that this improved performance in some languages, significantly reducing bias in Spanish, Russian, and Ukrainian (see Table 4). Admittedly, this is impractical as a general debiasing scheme, since it assumes oracle coreference resolution, yet it attests to the relation between coreference resolution and MT, and serves as a further indication of gender bias in MT. 3.4 Human Validation We estimate the accuracy of our gender bias evaluation method by randomly sampling 100 instances of all translation systems and target languages, annotating each sample by two target-language native speakers (resulting in 9,600 human annotations). Each instance conformed to a format similar to that used by our automatic gender detection algorithm: human annotators were asked to mark the gender of an entity within a given targetlanguage sentence. (e.g., see “el doctor” as highlighted in the Spanish sentence in Figure 1). By annotating at the sentence-level, we can account for both types of possible errors, i.e., alignment and gender extraction. We compare the sentence-level human annotations to the output of our automatic method, and find that the levels of agreement for all languages and systems were above 85%, with an average agreement on 87% of the annotations. In comparison, human inter-annotator agreement was 90%, due to noise introduced by several incoherent translations. Our errors occur when language-specific idiosyncrasies introduce ambiguity to the morphological analysis. For example, gender for certain words in Hebrew cannot be distinguished without 1683 Source [Target lang.] Predicted translation Phenomenon The janitor does not like the baker because she always messes up the kitchen. [ES] Al conserje no le gusta el panadero porque ella siempre desordena la cocina. Biased translation, giving “baker” a male inflection, with a mismatched pronoun reference. The janitor does not like the pretty baker because she always messes up the kitchen. [ES] Al conserje no le gusta la panadera bonita porque ella siempre desordena la cocina. Adding a stereotypically female adjective “fixes” the translation. The counselor asked the guard a few questions and praised her for the good work. [FR] Le conseiller a pos´e quelques questions `a la garde et l’a lou´ee pour le bon travail. French uses “garde” for both male and female guards, allowing for a more direct translation from English. Table 5: Examples of Google Translate’s output for different sentences in the WinoMT corpus. Words in blue, red, and orange indicate male, female and neutral entities, respectively. diacritics (e.g., the male and female versions of the word “baker” are spelled identically), and the contracted determiner in French and Italian (l’) is used for both masculine and feminine nouns. 
In addition, some languages have only male or female inflections for professions which were stereotypically associated with one of the genders, for example “sastre” (tailor) in Spanish or “soldat” (soldier) in French, which do not have female inflections. See Table 5 for detailed examples. 4 Discussion Related work This work is most related to several recent efforts which evaluate MT through the use of challenge sets. Similarly to our use WinoMT, these works evaluate MT systems (either manually or automatically) on test sets which are specially created to exhibit certain linguistic phenomena, thus going beyond the traditional BLEU metric (Papineni et al., 2002). These include challenge sets for language-specific idiosyncrasies (Isabelle et al., 2017), discourse phenomena (Bawden et al., 2018), pronoun translation (M¨uller et al., 2018; Webster et al., 2018), or coreference and multiword expressions (Burchardt et al., 2017). Limitations and future work While our work presents the first large-scale evaluation of gender bias in MT, it still suffers from certain limitations which could be addressed in follow up work. First, like some of the challenge sets discussed above, WinoMT is composed of synthetic English sourceside examples. On the one hand, this allows for a controlled experiment environment, while, on the other hand, this might introduce some artificial biases in our data and evaluation. Ideally, WinoMT could be augmented with natural “in the wild” instances, with many source languages, all annotated with ground truth entity gender. Second, similar to any medium size test set, it is clear that WinoMT serves only as a proxy estimation for the phenomenon of gender bias, and would probably be easy to overfit. A larger annotated corpus can perhaps provide a better signal for training. Finally, even though in Section 3.3 we show a very rudimentary debiasing scheme which relies on oracle coreference system, it is clear that this is not applicable in a real-world scenario. While recent research has shown that getting rid of such biases may prove to be very challenging (Elazar and Goldberg, 2018; Gonen and Goldberg, 2019), we hope that this work will serve as a first step for developing more gender-balanced MT models. 5 Conclusions We presented the first large-scale multilingual quantitative evidence for gender bias in MT, showing that on eight diverse target languages, all four tested popular commercial systems and two recent state-of-the-art academic MT models are significantly prone to translate based on gender stereotypes rather than more meaningful context. Our data and code are publicly available at https://github.com/ gabrielStanovsky/mt_gender. Acknowledgments We would like to thank Mark Yatskar, Iz Beltagy, Tim Dettmers, Ronan Le Bras, Kyle Richardson, Ariel and Claudia Stanovsky, and Paola Virga for many insightful discussions about the role gender plays in the languages evaluated in this work, as well as the reviewers for their helpful comments. 1684 References Meni Adler and Michael Elhadad. 2006. An unsupervised morpheme-based HMM for Hebrew morphological disambiguation. In ACL. Duygu Altinok. 2018. DEMorphy, German language morphological analyzer. CoRR, abs/1803.00902. David Alvarez-Melis and Tommi S. Jaakkola. 2017. A causal framework for explaining the predictions of black-box sequence-to-sequence models. In EMNLP. Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2018. Evaluating discourse phenomena in neural machine translation. In NAACLHLT. 
Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Tauman Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In NIPS. Aljoscha Burchardt, Vivien Macketanz, Jon Dehdari, Georg Heigold, Jan-Thorsten Peter, and Philip Williams. 2017. A linguistic evaluation of rule-based, phrase-based, and neural mt engines. The Prague Bulletin of Mathematical Linguistics, 108(1):159–170. Greville G Corbett. 2007. Gender and noun classes. Colette G Craig. 1986. Noun Classes and Categorization: Proceedings of a Symposium on Categorization and Noun Classification, volume 7. John Benjamins Publishing Company. Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In HLT-NAACL. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. arXiv preprint arXiv:1808.09381. Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In EMNLP. Joel Escud´e Font and Marta R. Costa-Juss`a. 2019. Equalizing gender biases in neural machine translation with word embeddings techniques. CoRR, abs/1901.03116. Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. HLT-NAACL. John A Hawkins. 2015. A Comparative Typology of English and German: Unifying the Contrasts. Routledge. Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear. Pierre Isabelle, Colin Cherry, and George F. Foster. 2017. A challenge set approach to evaluating machine translation. In EMNLP. Mikhail Korobov. 2015. Morphological analyzer and generator for Russian and Ukrainian languages. In Mikhail Yu. Khachay, Natalia Konstantinova, Alexander Panchenko, Dmitry I. Ignatov, and Valeri G. Labunets, editors, Analysis of Images, Social Networks and Texts, volume 542 of Communications in Computer and Information Science, pages 320– 332. Springer International Publishing. James Kuczmarski. 2018. Reducing gender bias in google translate. Hector J. Levesque. 2011. The Winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning. Angelica Mucchi-Faina. 2005. Visible or influential? language reforms and gender (in) equality. Social Science Information, 44(1):189–215. Mathias M¨uller, Annette Rios, Elena Voita, and Rico Sennrich. 2018. A large-scale test set for the evaluation of context-aware pronoun translation in neural machine translation. CoRR, abs/1810.02268. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. arXiv preprint arXiv:1806.00187. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL. Rachel Rudinger, Chandler May, and Benjamin Van Durme. 2017. Social bias in elicited natural language inferences. In EthNLP@EACL. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In NAACL-HLT. Kellie Webster, Marta Recasens, Vera Axelrod, and Jason Baldridge. 2018. Mind the gap: A balanced corpus of gendered ambiguous pronouns. Transactions of the Association for Computational Linguistics, 6:605–617. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. 
Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In EMNLP. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In NAACL-HLT.
2019
164
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1685–1695 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1685 LSTMEmbed: Learning Word and Sense Representations from a Large Semantically Annotated Corpus with Long Short-Term Memories Ignacio Iacobacci1,2∗and Roberto Navigli2 1Huawei Noah’s Ark Lab, London, United Kingdom 2Department of Computer Science, Sapienza University of Rome, Italy [email protected] {iacobacci,navigli}@di.uniroma1.it Abstract While word embeddings are now a de facto standard representation of words in most NLP tasks, recently the attention has been shifting towards vector representations which capture the different meanings, i.e., senses, of words. In this paper we explore the capabilities of a bidirectional LSTM model to learn representations of word senses from semantically annotated corpora. We show that the utilization of an architecture that is aware of word order, like an LSTM, enables us to create better representations. We assess our proposed model on various standard benchmarks for evaluating semantic representations, reaching state-of-the-art performance on the SemEval2014 word-to-sense similarity task. We release the code and the resulting word and sense embeddings at http://lcl.uniroma1. it/LSTMEmbed. 1 Introduction Natural Language is inherently ambiguous, for reasons of communicative efficiency (Piantadosi et al., 2012). For us humans, ambiguity is not a problem, since we use common knowledge to fill in the gaps and understand each other. Therefore, a computational model suited to understanding natural language and working side by side with humans should be capable of dealing with ambiguity to a certain extent (Navigli, 2018). A necessary step towards creating such computer systems is to build formal representations of words and their meanings, either in the form of large repositories of knowledge, e.g., semantic networks, or as vectors in a geometric space (Navigli and Martelli, 2019). In fact, Representation Learning (Bengio et al., 2013) has been a major research area in NLP over ∗Ignacio Iacobacci’s work was mainly done at the Sapienza University of Rome. the years, and latent vector-based representations, called embeddings, seem to be a good candidate for coping with ambiguity. Embeddings encode lexical and semantic items in a low-dimensional continuous space. These vector representations capture useful syntactic and semantic information of words and senses, such as regularities in the natural language, and relationships between them, in the form of relation-specific vector offsets. Recent approaches, such as word2vec (Mikolov et al., 2013), and GloVe (Pennington et al., 2014), are capable of learning efficient word embeddings from large unannotated corpora. But while word embeddings have paved the way for improvements in numerous NLP tasks (Goldberg, 2017), they still conflate the various meanings of each word and let its predominant sense prevail over all others in the resulting representation. Instead, when these embedding learning approaches are applied to senseannotated data, they are able to produce embeddings for word senses (Iacobacci et al., 2015). A strand of work aimed at tackling the lexical polysemy issue has proposed the creation of sense embeddings, i.e. 
embeddings which separate the various senses of each word in the vocabulary (Huang et al., 2012; Chen et al., 2014; Iacobacci et al., 2015; Flekova and Gurevych, 2016; Pilehvar and Collier, 2016; Mancini et al., 2017, among others). One of the weaknesses of these approaches, however, is that they do not take word ordering into account during the learning process. On the other hand, word-based approaches based on RNNs that consider sequence information have been presented, but they are not competitive in terms of speed or quality of the embeddings (Mikolov et al., 2010; Mikolov and Zweig, 2012; Mesnil et al., 2013). For example, in Figure 1 we show an excerpt of a t-SNE (Maaten and Hinton, 2008) projection of word and sense embeddings in the literature: 1686 Figure 1: An example joint space where word vectors (squares) and sense vectors (dots and crosses) appear separated. Figure 2: A shared space of words (squares) distributed across the space and two sense clusters (dots and crosses). as can be seen, first, the ambiguous word bank is located close to words which co-occur with it (squares in the Figure), and, second, the closest senses of bank (dots for the financial institution meaning and crosses for its geographical meaning) appear clustered in two separated regions without a clear correlation with (potentially ambiguous) words which are relevant to them. A more accurate representation would be to have word vectors distributed across all the space with defined clusters for each set of vectors related to each sense of a target word (Figure 2). Recently, the much celebrated Long-Short Term Memory (LSTM) neural network model has emerged as a successful model to learn representations of sequences, thus providing an ideal solution for many Natural Language Processing tasks whose input is sequence-based, e.g., sentences and phrases (Hill et al., 2016; Melamud et al., 2016; Peters et al., 2018). However, to date LSTMs have not been applied to the effective creation of sense embeddings linked to an explicit inventory. In this paper, we explore the capabilities of the architecture of LSTMs using sense-labeled corpora for learning semantic representations of words and senses. We present four main contributions: • We introduce LSTMEmbed, an RNN model based on a bidirectional LSTM for learning word and sense embeddings in the same semantic space, which – in contrast to the most popular approaches to the task – takes word ordering into account. • We present an innovative idea for taking advantage of pretrained embeddings by using them as an objective during training. • We show that LSTM-based models are suitable for learning not only contextual information, as is usually done, but also representations of individual words and senses. • By linking our representations to a knowledge resource, we take advantage of the preexisting semantic information. 2 Embeddings for words and senses Machine-interpretable representations of the meanings of words are key for a number of NLP tasks, and therefore obtaining good representations is an important research goal in the field, as shown by the surge of recent work on this topic. 2.1 Word Embeddings In recent years, we have seen an exponential growth in the popularity of word embeddings. Models for learning embeddings, typically based on neural networks, represent individual words as low-dimensional vectors. Mikolov et al. 
(2013, word2vec) showed that word representations learned with a neural network trained on raw text geometrically encode highly latent relationships. The canonical example is the vector resulting from king −man + woman found to be very close to the induced vector of queen. GloVe (Pennington et al., 2014), an alternative approach trained on aggregated global word-word co-occurrences, obtained similar results. While these embeddings are surprisingly good for monosemous words, they fail to represent the non-dominant senses of words properly. For instance, the representations of bar 1687 and pub should be similar, as well as those of bar and stick, but having similar representations for pub and stick is undesirable. Several approaches were proposed to mitigate this issue: Yu and Dredze (2014) presented an alternative way to train word embeddings by using, in addition to common features, words having some relation in a semantic resource, like PPDB (Ganitkevitch et al., 2013) or WordNet (Miller, 1995). Faruqui et al. (2015) presented a technique applicable to pre-processed embeddings, in which vectors are updated (“retrofitted”) in order to make them more similar to those which share a word type and less similar to those which do not. The word types were extracted from diverse semantic resources such as PPDB, WordNet and FrameNet (Baker et al., 1998). Melamud et al. (2016) introduced context2vec, a model based on a bidirectional LSTM for learning sentence and word embeddings. This model uses large raw text corpora to train a neural model that embeds entire sentential contexts and target words in the same lowdimensional space. Finally, Press and Wolf (2017) introduced a model, based on word2vec, where the embeddings are extracted from the output topmost weight matrix, instead of the input one, showing that those representations are also valid word embeddings. 2.2 Sense Embeddings In contrast to the above approaches, each of which aims to learn representations of lexical items, sense embeddings represent individual word senses as separate vectors. One of the main approaches for learning sense embeddings is the so-called knowledge-based approach, which relies on a predefined sense inventory such as WordNet, BabelNet1 (Navigli and Ponzetto, 2012) or Freebase2. SensEmbed3 (Iacobacci et al., 2015) uses Babelfy4, a state-of-the-art tool for Word Sense Disambiguation and Entity Linking, to build a sense-annotated corpus which, in turn, is used to train a vector space model for word senses with word2vec. SensEmbed exploits the structured knowledge of BabelNet’s sense inventory along with the distributional information gathered from text corpora. Since this approach is based on word2vec, the model suffers from the lack of word 1https://babelnet.org 2http://developers.google.com/freebase 3http://lcl.uniroma1.it/sensembed/ 4http://babelfy.org ordering while learning embeddings. An alternative way of learning sense embeddings is to start from a set of pretrained word embeddings and split the vectors into their respective senses. This idea was implemented by Rothe and Sch¨utze (2015) in AutoExtend, a system which learns embeddings for lexemes, senses and synsets from WordNet in a shared space. The synset/lexeme embeddings live in the same vector space as the word embeddings, given the constraint that words are sums of their lexemes and synsets are sums of their lexemes. AutoExtend is based on an auto-encoder, a neural network that mimics the input and output vectors. However, Mancini et al. 
(2017) pointed out that, by constraining the representations of senses, we cannot learn much about the relation between words and senses. They introduced SW2V, a model which extends word2vec to learn embeddings for both words and senses in the same vector space as an emerging feature, rather than via constraints on both representations. The model was built by exploiting large corpora and knowledge obtained from WordNet and BabelNet. Their basic idea was to extend the CBOW architecture of word2vec to represent both words and senses as different inputs and train the model in order to predict the word and its sense in the middle. Nevertheless, being based on word2vec, SW2V also lacks a notion of word ordering. Other approaches in the literature avoid the use of a predefined sense inventory. The vectors learned by such approaches are identified as multi-prototype embeddings rather than senses, due to the fact that these vectors are only identified as different from one another, while there is no clear identification of their inherent sense. Several approaches have used this idea: Huang et al. (2012) introduced a model which learned multi vectors per word by clustering word context representations. Neelakantan et al. (2014) extended word2vec and included a module which induced new sense vectors if the context in which a word occurred was too different from the previously seen contexts for the same word. A similar approach was introduced by Li and Jurafsky (2015), which used a Chinese Restaurant Process as a way to induce new senses. Finally, Peters et al. (2018) presented ELMo, a word-in-context representation model based on a deep bidirectional language model. In contrast to the other related approaches, ELMo does not have a token dictionary, but rather 1688 each token is represented by three vectors, two of which are contextual. These models are, in general, difficult to evaluate, due to their lack of linkage to a lexical-semantic resource. In marked contrast, LSTMEmbed, the neural architecture we present in this paper, aims to learn individual representations for word senses, linked to a multilingual lexical-semantic resource like BabelNet, while at the same time handling word ordering, and using pretrained embeddings as objective. 3 LSTMEmbed Many approaches for learning embeddings are based on feed-forward neural networks (Section 2). However, recently LSTMs have gained popularity in the NLP community as a new de facto standard model to process natural language, by virtue of their context and word-order awareness. In this section we introduce LSTMEmbed, a novel method to learn word and sense embeddings jointly and which is based on the LSTM architecture. 3.1 Model Overview At the core of LSTMEmbed is a bidirectional Long Short Term Memory (BiLSTM), a kind of recurrent neural network (RNN) which uses a set of gates especially designed for handling long-range dependencies. The bidirectional LSTM (BiLSTM) is a variant of the original LSTM (Hochreiter and Schmidhuber, 1997) that is particularly suited for temporal problems when access to the complete context is needed. In our case, we use an architecture similar to Kawakami and Dyer (2015), K˚ageb¨ack and Salomonsson (2016) and Melamud et al. (2016), where the state at each time step in the BiLSTM consists of the states of two LSTMs, centered in a particular timestep, accepting the input from previous timesteps in one LSTM, and the future timesteps in another LSTM. 
This is particularly suitable when the output corresponds to the analyzed timestep and not to the whole context. Figure 3 illustrates our model architecture. In marked contrast to the other LSTM-based approaches in the literature, we use sensetagged text to provide input contexts of the kind si−W , . . . , si−1 (the preceding context) and si+1, . . . , si+W (the posterior context), where sj (j ∈[i−W, . . . , i+W]) is either a word or a sense Figure 3: The LSTMEmbed architecture. tag from an existing inventory (see Section 4.1 for details). Each token is represented by its corresponding embedding vector v(sj) ∈Rn, given by a shared look-up table, which enables representations to be learned taking into account the contextual information on both sides. Next, the BiLSTM reads both sequences, i.e., the preceding context, from left to right, and the posterior context, from right to left: ol = lstml(v(si−W ), ..., v(si−1)) or = lstmr(v(si+1), ..., v(si+W )) (1) The model has one extra layer. The concatenation of the output of both LSTMs is projected linearly via a dense layer: outLSTMEmbed = Wo(ol ⊕or) (2) where Wo ∈R2m×m is the weights matrix of the dense layer with m being the dimension of the LSTM. Then, the model compares outLSTMEmbed with emb(si), where emb(si) is a pretrained embedding vector of the target token (see Section 4.1 for an illustration of the pretrained embeddings that we use in our experiments), and, depending on the annotation and the pretrained set of embeddings used, this could be either a word, or a sense. At training time, the weights of the network are modified in order to maximize the similarity between outLSTMEmbed and emb(si). The loss function 1689 is calculated in terms of cosine similarity: loss = 1 −S(⃗v1, ⃗v2) = 1 − ⃗v1 · ⃗v2 ∥⃗v1∥∥⃗v2∥ (3) Once the training is over, we obtain latent semantic representations of words and senses jointly in the same vector space from the look-up table, i.e., the embedding matrix between the input and the LSTM, with the embedding vector of an item s given by v(s). In comparison to a standard BiLSTM, the novelties of LSTMEmbed can be summarized as follows: • Using a sense-annotated corpus which includes both words and senses for learning the embeddings. • Learning representations of both words and senses, extracted from a single look-up table, shared between both left and right LSTMs. • A new learning method, which uses a set of pretrained embeddings as the objective, which enables us to learn embeddings for a large vocabulary. 4 Evaluation We now present an experimental evaluation of the representations learned with LSTMEmbed. We first provide implementation details (Section 4.1), and then, to show the effectiveness of our model on a broad range of tasks, report on two sets of experiments: those involving sense-level tasks (Section 4.2) and those concerned with the word level (Section 4.3). 4.1 Implementation Details Training data. We chose BabelNet (Navigli and Ponzetto, 2012) as our sense inventory.5 BabelNet is a large multilingual encyclopedic dictionary and semantic network, comprising approximately 16 million entries for concepts and named entities linked by semantic relations. As training corpus we used the English portion of BabelWiki,6 a multilingual corpus comprising the English Wikipedia (Scozzafava et al., 2015). 
The corpus was automatically annotated with named entities and concepts using Babelfy (Moro et al., 2014), a state-ofthe-art disambiguation and entity linking system, 5We used version 4.0 as available from the website. 6http://lcl.uniroma1.it/ babelfied-wikipedia/ based on the BabelNet semantic network. The English section of BabelWiki contains 3 billion tokens and around 3 million unique tokens. Learning embeddings. LSTMEmbed was built with the Keras7 library using Theano8 as backend. We trained our models with an Nvidia Titan X Pascal GPU. We set the dimensionality of the look-up table to 200 due to memory constraints. We discarded the 1,000 most frequent tokens and set the batch size to 2048. The training was performed for one epoch. As optimizer function we used Adaptive Moment Estimation or Adam (Kingma and Ba, 2014). As regards the objective embeddings emb(si) used for training, we chose 400-dimension sense embeddings trained using word2vec’s SkipGram architecture with negative sampling on the BabelWiki corpus and recommended parameters for the SkipGram architecture: window size of 10, negative sampling set on 10, sub-sampling of frequent words set to 103. 4.2 Sense-based Evaluation Our first set of experiments was aimed at showing the impact of our joint word and sense model in tasks where semantic, and not just lexical, relatedness is needed. We analyzed two tasks, namely Cross-Level Semantic Similarity and Most Frequent Sense Induction. Comparison systems. We compared the performance of LSTMEmbed against alternative approaches to sense embeddings: SensEmbed (Iacobacci et al., 2015), which obtained semantic representations by applying word2vec to the English Wikipedia disambiguated with Babelfy; Nasari (Camacho-Collados et al., 2015), a technique for rich semantic representation of arbitrary concepts present in WordNet and Wikipedia pages; AutoExtend (Rothe and Sch¨utze, 2015) which, starting from the word2vec word embeddings learned from GoogleNews9, infers the representation of senses and synsets from WordNet; DeConf, an approach introduced by Pilehvar and Collier (2016) that decomposes a given word representation into its constituent sense representations by exploiting WordNet. 7https://keras.io 8http://deeplearning.net/software/ theano/index.html 9https://code.google.com/archive/p/ word2vec/ 1690 Model Pearson Spearman MeerkatMafia 0.389* 0.380 SemantiKLU 0.314 0.327 SimCompass 0.356 0.344 AutoExtend 0.362 0.364 SensEmbed 0.316 0.333 SW2V 0.311 0.308 Nasari 0.244 0.220 DeConf 0.349 0.356 LSTMEmbed 0.380* 0.400 Table 1: Pearson and Spearman correlations on the CLSS word-to-sense similarity task. * Not statistically significant difference (χ2, p < 0.05). Experiment 1: Cross-Level Semantic Similarity. To best evaluate the ability of embeddings to discriminate between the various senses of a word, we opted for the SemEval-2014 task on Cross-Level Semantic Similarity (Jurgens et al., 2014, CLSS), which includes word-to-sense similarity as one of its sub-tasks. The CLSS word-tosense similarity dataset comprises 500 instances of words, each paired with a short list of candidate senses from WordNet with human ratings for their word-sense relatedness. To compute the word-tosense similarity we used our shared vector space of words and senses, and calculated the similarity using the cosine distance. 
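Since words and senses share one vector space, this word-to-sense score is nothing more than a cosine between two rows of the look-up table; a small sketch is shown below. The `embeddings` dictionary (token to numpy vector) is assumed to hold the representations read from LSTMEmbed's look-up table, and the sense identifier in the comment is only illustrative.

```python
# Word-to-sense similarity in the shared word/sense space, as used for the CLSS sub-task.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def word_to_sense_similarity(word, sense, embeddings):
    """Return the cosine between a word vector and a sense vector in the shared space."""
    return cosine(embeddings[word], embeddings[sense])

# Illustrative identifiers only; the actual inventory uses BabelNet synset IDs, e.g.:
# score = word_to_sense_similarity("bank", "bank_bn:00008364n", embeddings)
```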
We included not only alternative sense-based representations but also the best performing approaches on this task: MeerkatMafia (Kashyap et al., 2014), which uses Latent Semantic Analysis (Deerwester et al., 1990) and WordNet glosses to get word-sense similarity measurements; SemantiKLU (Proisl et al., 2014), an approach based on a distributional semantic model trained on a large Web corpus from different sources; SimCompass (Banea et al., 2014), which combines word2vec with information from WordNet. The results are given as Pearson and Spearman correlation scores in Table 1. LSTMEmbed achieves the state of the art by surpassing, in terms of Spearman correlation, alternative sense embedding approaches, as well as the best systems built specifically for the CLSS word-to-sense similarity task. In terms of Pearson, LSTMEmbed is on a par with the current state of the art, i.e., MeerkatMafia. Model P@1 P@3 P@5 AutoExtend 22.8 52.0 56.6 SensEmbed 38.4 56.1 63.0 SW2V 39.7 60.3 67.5 Nasari 27.4 40.2 44.6 DeConf 30.1 55.8 64.3 LSTMEmbed 39.0 59.2 66.0 Table 2: Precision on the MFS task (percentages). Experiment 2: Most Frequent Sense Induction. In a second experiment, we employed our representations to induce the most frequent sense (MFS) of the input words, which is known to be a hard-to-beat baseline for Word Sense Disambiguation systems (Navigli, 2009). The MFS is typically computed by counting the word sense pairs in an annotated corpus such as SemCor (Miller et al., 1993). To induce a MFS using sense embeddings, we identified – among all the sense embeddings of an ambiguous word – the sense which was closest to the word in terms of cosine similarity in the vector space. We evaluated all the sense embedding approaches on this task by comparing the induced most frequent senses against the MFS computed for all those words in SemCor which have a minimum number of 5 sense annotations (3731 words in total, that we release with the paper), so as to exclude words with insufficient gold-standard data for the estimates. We carried out our evaluation by calculating precision@K (K ∈{1, 3, 5}). Table 2 shows that, across all the models, SW2V performs the best, leaving LSTMEmbed as the best runnerup approach. 4.3 Word-based Evaluation While our primary goal was to show the effectiveness of LSTMEmbed on tasks in need of sense information, we also carried out a second set of experiments focused on word-based evaluations with the objective of demonstrating the ability of our joint word and sense embedding model to tackle tasks traditionally approached with wordbased models. Experiment 3: Synonym Recognition. We first experimented with synonym recognition: given a target word and a set of alternative words, the objective of this task was to select the member from 1691 Model Accuracy TOEFL-80 ESL-50 word2vec 87.00 62.00 GloVe 88.75 60.00 Jauhar et al. (2015) 80.00 73.33* MSSG 78.26 57.14 Li and Jurafsky (2015) 82.61 50.00 MUSE 88.41 64.29 LSTMEmbed 92.50 72.00* Table 3: Synonym Recognition: accuracy (percentages). * Not statistically significant difference (χ2, p < 0.05). the set which was most similar in meaning to the target word. The most likely synonym for a word w given the set of candidates Aw is calculated as: Syn (w, Aw) = arg max v∈Aw Sim (w, v) (4) where Sim is the pairwise word similarity: Sim (w1, w2) = max s1∈Sw1 s2∈Sw2 cosine (⃗s1, ⃗s2) (5) where Swi is the set of words and senses associated with the word wi. 
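Equations (4) and (5) translate directly into a few lines of code: the pairwise similarity of two words is the maximum cosine over all vectors associated with them, and the predicted synonym is the candidate maximising that similarity. The helper `associated_vectors` is hypothetical; as described next, the set S_w is built from the word's inflected forms and its senses.

```python
# Sketch of Equations (4)-(5) for the synonym-recognition experiments.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def pairwise_similarity(w1, w2, associated_vectors):
    """Equation (5): max cosine over all vectors associated with w1 and w2."""
    return max(cosine(s1, s2)
               for s1 in associated_vectors(w1)
               for s2 in associated_vectors(w2))

def best_synonym(target, candidates, associated_vectors):
    """Equation (4): pick the candidate with the highest pairwise similarity."""
    return max(candidates,
               key=lambda c: pairwise_similarity(target, c, associated_vectors))

# Example of a TOEFL-style question:
# best_synonym("enormously",
#              ["appropriately", "uniquely", "tremendously", "decidedly"],
#              associated_vectors)
```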
We consider all the inflected forms of every word, with and without all its possible senses. In order to evaluate the performance of LSTMEmbed on this task, we carried out experiments on two datasets. The first one, introduced by Landauer and Dumais (1997), is extracted directly from the synonym questions of the TOEFL (Test of English as a Foreign Language) questionnaire. The test comprises 80 multiple-choice synonym questions with four choices per question. The second one, introduced by Turney (2001), provides a set of questions extracted from the synonym questions of the ESL test (English as a Second Language). Similarly to TOEFL, it comprises 50 multiple-choice synonym questions with four choices per question. Several related efforts used this kind of metric to evaluate their representations. We compare our approach with the following: • Multi-Sense Skip-gram (Neelakantan et al., 2014, MGGS), an extension of the Skip-gram model of word2vec capable of learning multiple embeddings for a single word. The model makes no assumption about the number of prototypes. • Li and Jurafsky (2015), a multi-sense embeddings model based on the Chinese Restaurant Process. • Jauhar et al. (2015), a multi-sense approach based on expectation-maximization style algorithms for inferring word sense choices in the training corpus and learning sense embeddings while incorporating ontological sources of information. • Modularizing Unsupervised Sense Embeddings (Lee and Chen, 2017, MUSE), an unsupervised approach that introduces a modularized framework to create sense-level representation learned with linear-time sense selection. In addition, we included in the comparison two off-the-shelf popular word embedding models: GoogleNews, a set of word embeddings trained with word2vec, from a corpus of newspaper articles, and Glove.6B10, a set of word embeddings trained on a merge of 2014 English Wikipedia dump and the corpus from Gigaword 5, for a total of 6 billion tokens. In Table 3 we report the performance of LSTMEmbed together with the alternative approaches (the latter obtained from the respective publications). We can see that, on the TOEFL task, LSTMEmbed outperforms all other approaches, including the word-based models. On the ESL task, LSTMEmbed is the runner-up approach across systems and only by a small margin. The performance of the remaining models is considerably below ours. Experiment 4: Outlier detection. Our second word-based evaluation was focused on outlier detection, a task intended to test the capability of the learned embeddings to create semantic clusters, that is, to test the assumption that the representation of related words should be closer than the representations of unrelated ones. We tested our model on the 8-8-8 dataset introduced by Camacho-Collados and Navigli (2016), containing eight clusters, each with eight words and eight possible outliers. In our case, we extended the 10https://nlp.stanford.edu/projects/ glove/ 1692 Model Corpus Sense 8-8-8 OPP Acc. word2vec* UMBC 92.6 73.4 Wikipedia 93.8 70.3 GoogleNews 94.7 70.3 GloVe* UMBC 81.6 40.6 Wikipedia 91.8 56.3 AutoExtend GoogleNews ✓ 82.8 37.5 SensEmbed Wikipedia ✓ 98.0 95.3 SW2V Wikipedia ✓ 48.4 37.5 Nasari Wikipedia ✓ 94.0 76.3 DeConf GoogleNews ✓ 93.8 62.5 LSTMEmbed Wikipedia ✓ 96.1 78.1 Table 4: Outlier detection task (* reported in Camacho-Collados and Navigli (2016)). 
similarity function used in the evaluation to consider both the words in the dataset and their senses, similarly to what we had done in the synonym recognition task (cf. Equation 5). We can see from Table 4 that LSTMEmbed ranks second below SensEmbed in terms of both measures defined in the task (accuracy, and outlier position percentage, which considers the position of the outlier according to the proximity of the semantic cluster), with both approaches outperforming all other word-based and sense-based approaches. 5 Analysis The objective embedding emb we used in our work uses pretrained sense embeddings obtained from word2vec trained on BabelWiki, as explained in Section 4.1. Our assumption was that training with richer and meaningful objective embeddings would enhance the representation delivered by our model in comparison to using wordbased models. We put this hypothesis to the test by comparing the performance of LSTMEmbed equipped with five sets of pretrained embeddings on a word similarity task. We used the WordSim353 (Finkelstein et al., 2002) dataset, which comprises 353 word pairs annotated by human subjects with a pairwise relatedness score. We computed the performance of LSTMEmbed with the different pretrained embeddings in terms of Spearman correlation between the cosine similarities of the Model Objective Dim. WS353 word2vec 0.488 GloVe 0.557 LSTMEmbed random (baseline) 50 0.161 word2vec 50 0.573 word2vec + retro 50 0.569 GoogleNews 300 0.574 GloVe.6B 300 0.577 SensEmbed 400 0.612 Table 5: Spearman correlation on the Word Similarity Task. LSTMEmbed word vectors and the WordSim-353 scores. The first set of pretrained embeddings is a 50-dimension word space model, trained with word2vec Skip-gram with the default configuration. The second set consists of the same vectors, retrofitted with PPDB using the default configuration. The third is the GoogleNews set of pretrained embeddings. The fourth is the GloVe.6B word space model. Finally, we tested our model with the pretrained embeddings of SensEmbed. As a baseline we included a set of normalized random vectors. As is shown in Table 5, using richer pretrained embeddings improves the resulting representations given by our model. All the representations obtain better results compared to word2vec and GloVe trained on the same corpus, with the sense embeddings from SensEmbed, a priori the richest set of pretrained embeddings, attaining the best performance. 6 Conclusions We presented LSTMEmbed, a new model based on a bidirectional LSTM for learning embeddings of words and senses jointly, and which is able to learn semantic representations on a par with, or better than, state-of-the-art approaches. We draw three main findings. Firstly, we have shown that our semantic representations are capable to properly reflect the similarity between word and sense representations, showing state-of-the-art performance in the sense-aware tasks of word-to-sense similarity and most frequent sense induction. Secondly, our approach is also able to attain high performance in standard word-based semantic evaluations, namely, synonym recognition and outlier 1693 detection. Finally, the introduction of an output layer which predicts pretrained embeddings enables us to use larger vocabularies instead of using the slower softmax. We release the word and sense embeddings at the following URL: http: //lcl.uniroma1.it/LSTMEmbed. Our model shows potential for further applications. 
We did, in fact, explore alternative configurations, for instance, using several layers or replacing the LSTMs with Gated Recurrent Units (Cho et al., 2014) or the Transformer architecture (Vaswani et al., 2017). Trying more complex networks is also within our scope and is left as future work. Acknowledgments The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487 under the European Union’s Horizon 2020 research and innovation programme. The authors gratefully acknowledge the support of NVIDIA Corporation Hardware Grant. References Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The berkeley framenet project. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1, pages 86–90. Carmen Banea, Di Chen, Rada Mihalcea, Claire Cardie, and Janyce Wiebe. 2014. SimCompass: Using Deep Learning Word Embeddings to Assess Cross-level Similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 560–565, Dublin, Ireland. Y. Bengio, A. Courville, and P. Vincent. 2013. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828. Jos´e Camacho-Collados and Roberto Navigli. 2016. Find the word that does not belong: A framework for an intrinsic evaluation of word vector representations. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 43–50, Berlin, Germany. Jos´e Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. NASARI: a novel approach to a semantically-aware representation of items. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 567–577, Denver, Colorado. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1025–1035, Doha, Qatar. Kyunghyun Cho, Bart van Merri¨enboer, C¸ alar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734, Doha, Qatar. Scott C. Deerwester, Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science, 41(6):391–407. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1606–1615, Denver, Colorado. Lev Finkelstein, Gabrilovich Evgeniy, Matias Yossi, Rivlin Ehud, Solan Zach, Wolfman Gadi, and Ruppin Eytan. 2002. Placing search in context: The concept revisited. ACM Transactions on Information Systems, 20(1):116–131. Lucie Flekova and Iryna Gurevych. 2016. Supersense embeddings: A unified model for supersense interpretation, prediction, and utilization. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2029–2041. 
Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 758–764, Atlanta, Georgia. Yoav Goldberg. 2017. Neural Network Methods in Natural Language Processing. Morgan & Claypool Publishers. Felix Hill, KyungHyun Cho, Anna Korhonen, and Yoshua Bengio. 2016. Learning to understand phrases by embedding the dictionary. Transactions of the Association for Computational Linguistics, Volume 4, pages 17–30. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. 1694 Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 873–882, Jeju Island, South Korea. Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. SensEmbed: Learning sense embeddings for word and relational similarity. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 95–105, Beijing, China. Sujay Kumar Jauhar, Chris Dyer, and Eduard Hovy. 2015. Ontologically Grounded Multi-sense Representation Learning for Semantic Vector Space Models. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 683–693, Denver, Colorado. David Jurgens, Mohammad Taher Pilehvar, and Roberto Navigli. 2014. SemEval-2014 Task 3: Cross-Level Semantic Similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 17–26, Dublin, Ireland. Mikael K˚ageb¨ack and Hans Salomonsson. 2016. Word Sense Disambiguation using a Bidirectional LSTM. In Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon (CogALex - V), Osaka, Japan. The COLING 2016 Organizing Committee. Abhay Kashyap, Lushan Han, Roberto Yus, Jennifer Sleeman, Taneeya Satyapanich, Sunil Gandhi, and Tim Finin. 2014. Meerkat Mafia: Multilingual and Cross-Level Semantic Textual Similarity Systems. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 416– 423, Dublin, Ireland. Kazuya Kawakami and Chris Dyer. 2015. Learning to represent words in context with multilingual supervision. CoRR, abs/1511.04623. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Thomas K. Landauer and Susan T. Dumais. 1997. A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological review, 104(2):211. Guang-He Lee and Yun-Nung Chen. 2017. MUSE: Modularizing Unsupervised Sense Embeddings. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 327–337, Copenhagen, Denmark. Jiwei Li and Dan Jurafsky. 2015. Do multi-sense embeddings improve natural language understanding? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1722–1732, Lisbon, Portugal. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605. 
Massimiliano Mancini, Jose Camacho-Collados, Ignacio Iacobacci, and Roberto Navigli. 2017. Embedding words and senses together via joint knowledgeenhanced training. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 100–111. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional LSTM. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, Berlin, Germany. Gr´egoire Mesnil, Xiaodong He, Li Deng, and Yoshua Bengio. 2013. Investigation of RNN architectures and learning methods for spoken language understanding. In INTERSPEECH-2013, pages 3771– 3775, Lyon, France. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH-2010, Makuhari, Japan. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168. Tomas Mikolov and Geoffrey Zweig. 2012. Context dependent recurrent neural network language model. In 2012 IEEE Spoken Language Technology Workshop (SLT), pages 234–239, Miami, Florida. IEEE. George A. Miller. 1995. Wordnet: A lexical database for english. Communications of the ACM, 38(11):39–41. George A. Miller, Claudia Leacock, Randee Tengi, and Ross Bunker. 1993. A semantic concordance. In Proceedings of the Workshop on Human Language Technology, pages 21–24, Plainsboro, New Jersey. Andrea Moro, Alessandro Raganato, and Roberto Navigli. 2014. Entity linking meets word sense disambiguation: a unified approach. Transactions of the Association for Computational Linguistics, Volume 2, pages 231–244. Roberto Navigli. 2009. Word sense disambiguation: a survey. ACM Computing Surveys, 41(2):1–69. Roberto Navigli. 2018. Natural language understanding: Instructions for (present and future) use. In Proc. of IJCAI, pages 5697–5702. 1695 Roberto Navigli and Federico Martelli. 2019. An Overview of Word and Sense Similarity. Natural Language Engineering, 25(6). Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The Automatic Construction, Evaluation and Application of a Wide-Coverage Multilingual Semantic Network. AI, 193:217–250. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient nonparametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1059–1069, Doha, Qatar. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Steven T. Piantadosi, Harry Tily, and Edward Gibson. 2012. The communicative function of ambiguity in language. Cognition, 122(3):280 – 291. Mohammad Taher Pilehvar and Nigel Collier. 2016. De-conflated semantic representations. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1680–1690, Austin, Texas. 
Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 157–163, Valencia, Spain. Thomas Proisl, Stefan Evert, Paul Greiner, and Besim Kabashi. 2014. SemantiKLUE: Robust Semantic Similarity at Multiple Levels Using Maximum Weight Matching. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 532–540, Dublin, Ireland. Sascha Rothe and Hinrich Sch¨utze. 2015. Autoextend: Extending word embeddings to embeddings for synsets and lexemes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1793–1803, Beijing, China. Federico Scozzafava, Alessandro Raganato, Andrea Moro, and Roberto Navigli. 2015. Automatic identification and disambiguation of concepts and named entities in the multilingual wikipedia. In AIxIA, pages 357–366. Peter D. Turney. 2001. Mining the Web for Synonyms: PMI-IR Versus LSA on TOEFL. In Proceedings of the Twelth European Conference on Machine Learning (ECML), pages 491–502, Freiburg, Germany. Springer. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Curran Associates, Inc. Mo Yu and Mark Dredze. 2014. Improving Lexical Embeddings with Semantic Knowledge. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 545–550, Baltimore, Maryland.
2019
165
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1696–1705 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1696 Understanding Undesirable Word Embedding Associations Kawin Ethayarajh, David Duvenaud†, Graeme Hirst University of Toronto †Vector Institute {kawin, duvenaud, gh}@cs.toronto.edu Abstract Word embeddings are often criticized for capturing undesirable word associations such as gender stereotypes. However, methods for measuring and removing such biases remain poorly understood. We show that for any embedding model that implicitly does matrix factorization, debiasing vectors post hoc using subspace projection (Bolukbasi et al., 2016) is, under certain conditions, equivalent to training on an unbiased corpus. We also prove that WEAT, the most common association test for word embeddings, systematically overestimates bias. Given that the subspace projection method is provably effective, we use it to derive a new measure of association called the relational inner product association (RIPA). Experiments with RIPA reveal that, on average, skipgram with negative sampling (SGNS) does not make most words any more gendered than they are in the training corpus. However, for gender-stereotyped words, SGNS actually amplifies the gender association in the corpus. 1 Introduction A common criticism of word embeddings is that they capture undesirable associations in vector space. In addition to gender-appropriate analogies such as king:queen::man:woman, stereotypical analogies such as doctor:nurse::man:woman also hold in SGNS embedding spaces (Bolukbasi et al., 2016). Caliskan et al. (2017) created an association test for word vectors called WEAT, which uses cosine similarity to measure how associated words are with respect to two sets of attribute words (e.g., ‘male’ vs. ‘female’). For example, they claimed that science-related words were significantly more associated with male attributes and art-related words with female ones. Since these associations are socially undesirable, they were described as gender bias. Despite these remarkable findings, such undesirable word associations remain poorly understood. For one, what causes them – is it biased training data, the embedding model itself, or just noise? Why should WEAT be the test of choice for measuring associations in word embeddings? Bolukbasi et al. (2016) found that word vectors could be debiased by defining a “bias subspace” in the embedding space and then subtracting from each vector its projection on this subspace. But what theoretical guarantee is there that this method actually debiases vectors? In this paper, we answer several of these open questions. We begin by proving that for any embedding model that implicitly does matrix factorization (e.g., GloVe, SGNS), debiasing vectors post hoc via subspace projection is, under certain conditions, equivalent to training on an unbiased corpus without reconstruction error. We find that contrary to what Bolukbasi et al. (2016) suggested, word embeddings should not be normalized before debiasing, as vector length can contain important information (Ethayarajh et al., 2018). To guarantee unbiasedness, the bias subspace should also be the span – rather than a principal component – of the vectors used to define it. If applied this way, the subspace projection method can be used to provably debias SGNS and GloVe embeddings with respect to the word pairs that define the bias subspace. 
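A minimal sketch of the post hoc debiasing operation discussed above is given below, taking the bias subspace to be the span of the defining difference vectors and leaving the vectors unnormalized, as argued for in this paper; the function and variable names are illustrative, and this is not the released implementation of Bolukbasi et al. (2016).

```python
import numpy as np

def debias(word_vecs, defining_pairs):
    """Subtract from each vector its projection on the bias subspace.

    word_vecs: dict mapping words to unnormalized embedding vectors
    (the text argues vectors should not be normalized before debiasing).
    defining_pairs: list of word pairs (x, y), e.g. [("man", "woman"), ...].
    The bias subspace is the span of the difference vectors x - y, the
    condition under which the unbiasedness guarantee applies.
    """
    diffs = np.stack([word_vecs[x] - word_vecs[y] for x, y in defining_pairs]).T  # (d, m)
    # Orthonormal basis for the span of the difference vectors (rank-safe via SVD).
    U, sing, _ = np.linalg.svd(diffs, full_matrices=False)
    B = U[:, sing > 1e-10]                     # columns span the bias subspace
    defining_words = {w for pair in defining_pairs for w in pair}
    debiased = {}
    for w, v in word_vecs.items():
        if w in defining_words:
            debiased[w] = v                    # words defining the subspace are left as-is
        else:
            debiased[w] = v - B @ (B.T @ v)    # remove the component inside the subspace
    return debiased
```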
Using this notion of a “bias subspace”, we then prove that WEAT, the most common association test for word embeddings, has theoretical flaws that cause it to systematically overestimate bias. At least for SGNS and GloVe, it implicitly requires the two sets of attribute words (e.g., ‘male’ vs. ‘female’) to occur with equal frequency in the training corpus; when they do not, even gender-neutral words can be classified as gender-biased, for example. The outcome of a WEAT test can also 1697 be easily manipulated by contriving the attribute word sets, allowing virtually any word – even a gender-neutral one such as ‘door’ – to be classified as male- or female-biased relative to another gender-neutral word. Given that subspace projection removal provably debiases embeddings, we use it to derive a new measure of association in word embeddings called the relational inner product association (RIPA). Given a set of ordered word pairs (e.g., {(‘man’, ‘woman’), (‘male’, ‘female’)}), we take the first principal component of all the difference vectors, which we call the relation vector ⃗b. In Bolukbasi et al.’s terminology,⃗b would be a one-dimensional bias subspace. Then, for a word vector ⃗w, the relational inner product is simply ⟨⃗w,⃗b⟩. Because RIPA is intended for embedding models that implicitly do matrix factorization, it has an information theoretic interpretation. This allows us to directly compare the actual word association in embedding space with what we would expect the word association to be, given the training corpus. Making such comparisons yields several novel insights: 1. SGNS does not, on average, make the vast majority of words any more gendered in the vector space than they are in the training corpus; individual words may be slightly more or less gendered due to reconstruction error. However, for words that are genderstereotyped (e.g., ‘nurse’) or gender-specific by definition (e.g., ‘queen’), SGNS amplifies the gender association in the training corpus. 2. To use the subspace projection method, one must have prior knowledge of which words are gender-specific by definition, so that they are not also debiased. Debiasing all vectors can preclude gender-appropriate analogies such as king:queen::man:woman from holding in the embedding space. In contrast to the supervised method proposed by Bolukbasi et al. (2016) for identifying these gender-specific words, we introduce an unsupervised method. Ours is much more effective at preserving gender-appropriate analogies and precluding gender-biased ones. To allow a fair comparison with prior work, our experiments in this paper focus on gender association. However, our claims extend to other types of word associations as well, which we leave as future work. 2 Related Work Word Embeddings Word embedding models generate distributed representations of words in a low-dimensional continuous space. This is generally done using: (a) neural networks that learn embeddings by predicting the contexts words appear in, or vice-versa (Bengio et al., 2003; Mikolov et al., 2013; Collobert and Weston, 2008); (b) low-rank approximations of word-context matrices containing a co-occurrence statistic (Landauer and Dumais, 1997; Levy and Goldberg, 2014). The objective of SGNS is to maximize the probability of observed word-context pairs and to minimize the probability of k randomly sampled negative examples. 
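For reference, the SGNS objective described in words above can be written, for an observed word-context pair (w, c) with k negative samples drawn from a noise distribution P_n (the standard formulation following Mikolov et al. (2013); the notation here is ours):

```latex
\ell(w,c) \;=\; \log \sigma\!\left(\vec{w}\cdot\vec{c}\,\right)
\;+\; k\,\mathbb{E}_{c_N \sim P_n}\!\left[\log \sigma\!\left(-\vec{w}\cdot\vec{c}_N\right)\right],
\qquad \sigma(t) = \frac{1}{1+e^{-t}} .
```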
Though no co-occurrence statistics are explicitly calculated, Levy and Goldberg (2014) proved that SGNS is implicitly factorizing a word-context PMI matrix shifted by −logk. Similarly, GloVe implicitly factorizes a log cooccurrence count matrix (Pennington et al., 2014). Word Analogies A word analogy a:b::x:y asserts that “a is to b as x is to y” and holds in the embedding space iff ⃗a + (⃗y −⃗x) =⃗b. Ethayarajh et al. (2018) proved that for GloVe and SGNS, a:b::x:y holds exactly in an embedding space with no reconstruction error iff the words are coplanar and the co-occurrence shifted PMI is the same for each word pair and across both word pairs. Word analogies are often used to signify that semantic and syntactic properties of words (e.g., verb tense, gender) can be captured as linear relations. Measuring Associations Caliskan et al. (2017) proposed what is now the most commonly used association test for word embeddings. The word embedding association test (WEAT) uses cosine similarity to measure how associated two given sets of target words are with respect to two sets of attribute words (e.g., ‘male’ vs. ‘female’). For example, Caliskan et al. (2017) claimed that sciencerelated words are more associated with ‘male’ than ‘female’ attributes compared to art-related words, and that this was statistically significant. However, aside from some intuitive results (e.g., that female names are associated with female attributes), there is little evidence that WEAT is a good measure of association. 1698 Debiasing Embeddings Bolukbasi et al. (2016) claimed that the existence of stereotypical analogies such as doctor:nurse::man:woman constituted gender bias. To prevent such analogies from holding in the vector space, they subtracted from each biased word vector its projection on a “gender bias subspace”. This subspace was defined by the first m principal components for ten gender relation vectors (e.g., ⃗ man − ⃗ woman). Each debiased word vector was thus orthogonal to the gender bias subspace and its projection on the subspace was zero. While this subspace projection method precluded gender-biased analogies from holding in the embedding space, Bolukbasi et al. (2016) did not provide any theoretical guarantee that the vectors were unbiased (i.e., equivalent to vectors that would be obtained from training on a gender-agnostic corpus with no reconstruction error). Other work has tried to learn gender-neutral embeddings from scratch (Zhao et al., 2018), despite this approach requiring custom changes to the objective of each embedding model. 3 Provably Debiasing Embeddings Experiments by Bolukbasi et al. (2016) found that debiasing word embeddings using the subspace projection method precludes gender-biased analogies from holding. However, as we noted earlier, despite this method being intuitive, there is no theoretical guarantee that the debiased vectors are perfectly unbiased or that the debiasing method works for embedding models other than SGNS. In this section, we prove that for any embedding model that does implicit matrix factorization (e.g., GloVe, SGNS), debiasing embeddings post hoc using the subspace projection method is, under certain conditions, equivalent to training on a perfectly unbiased corpus without reconstruction error. Definition 1 Let M denote the symmetric wordcontext matrix for a given training corpus that is implicitly or explicitly factorized by the embedding model. Let S denote a set of word pairs. A word w is unbiased with respect to S iff ∀(x,y) ∈S,Mw,x = Mw,y. 
M is unbiased with respect to S iff ∀w ̸∈S, w is unbiased. A word w or matrix M is biased wrt S iff it is not unbiased wrt S. Note that Definition 1 does not make any distinction between socially acceptable and socially unacceptable associations. A word that is genderspecific by definition and a word that is genderbiased due to stereotypes would both be considered biased by Definition 1, although only the latter is undesirable. For example, by Definition 1, ‘door’ would be unbiased with respect to the set {(‘male’, ‘female’)} iff the entries for Mdoor,male and Mdoor,female were interchangeable. The entire corpus would be unbiased with respect to the set iff Mw,male and Mw,female were interchangeable for any word w. Since M is a word-context matrix containing a co-occurrence statistic, unbiasedness effectively means that the elements for (w,‘male’) and (w,‘female’) in M can be switched without any impact on the embeddings. M is factorized into a word matrix W and context matrix C such that WCT = M, with the former giving us our word embeddings. Debiasing Theorem For a set of word pairs S, let the bias subspace B = span({⃗x−⃗y|(x,y) ∈S}). For every word w ̸∈S, let ⃗wd ≜⃗w −projB⃗w. The reconstructed word-context matrix WdCT = Md is unbiased with respect to S. Proof of Theorem When there is no reconstruction error, we know from Definition 1 that a word w is unbiased with respect to a set of word pairs S iff ∀(x,y) ∈S Mw,x = Mw,y ⇐⇒⟨⃗w,⃗xc⟩= ⟨⃗w,⃗yc⟩ ⇐⇒⟨⃗w,⃗xc −⃗yc⟩= 0 (1) From Lemma 2 of Ethayarajh et al. (2018), we also know that under perfect reconstruction, ∃λ ∈ R,C = λW. For a detailed explanation, we refer the reader to the proof of that lemma. In short, if a linear word analogy holds over S (i.e., the word pairs have the same difference vector), then there exists a real symmetric matrix A that maps W to C. A’s eigenvectors form a basis for the word space but A can only have non-distinct eigenvalues if the relative geometry of the word space is to be preserved. All word vectors must therefore lie in the same eigenspace, with eigenvalue λ. This implies that for any word w and any (x,y) ∈S, ∃λ ∈R,⟨⃗w,⃗xc −⃗yc⟩= λ ⟨⃗w,⃗x−⃗y⟩ (2) Each debiased word vector wd is orthogonal to the bias subspace in the word embedding space, so ∀(x,y) ∈S,⟨⃗wd,⃗x−⃗y⟩= 0. In conjunction with (2), this implies that ∀(x,y) ∈S,λ ⟨⃗wd,⃗x−⃗y⟩= ⟨⃗wd,⃗xc −⃗yc⟩= 0. This means that if a debiased word w is represented with vector ⃗wd instead of ⃗w, it is unbiased with respect to S by Definition 1699 1. This implies that the co-occurrence matrix Md that is reconstructed using the debiased word matrix Wd is also unbiased with respect to S. The subspace projection method is therefore far more powerful than initially stated in Bolukbasi et al. (2016): not only can it be applied to any embedding model that implicitly does matrix factorization (e.g., GloVe, SGNS), but debiasing word vectors in this way is equivalent to training on a perfectly unbiased corpus when there is no reconstruction error. However, word vectors should not be normalized prior to debiasing, since the matrix that is factorized by the embedding model cannot necessarily be reconstructed with normalized embeddings. Unbiasedness with respect to word pairs S is also only guaranteed when the bias subspace B = span({⃗x−⃗y|(x,y) ∈S}). Because we define unbiasedness with respect to a set of word pairs, we cannot make any claims about word pairs outside that set. For example, consider the set S = {(‘man’,‘woman’)}. 
If we define a bias subspace using S and use it to debias ⃗w, we can only say definitively that ⃗w is unbiased with respect to S. We cannot claim, for example, that ⃗w is also unbiased with respect to {(‘policeman’,‘policewoman’)}, because it is possible that ⃗ policewoman − ⃗ policeman ̸= ⃗ woman−⃗ man. Debiasing ⃗w with respect to a nonexhaustive set of gender-defining word pairs is not equivalent to erasing all vestiges of gender from ⃗w. This may explain why it is still possible to cluster words by gender after debiasing them using a handful of gender-defining word pairs (Gonen and Goldberg, 2019). 4 The Flaws of WEAT Given attribute word sets X and Y (e.g., {‘male’, ‘man’} vs. {‘female’, ‘woman’}), WEAT uses a cosine similarity-based measurement to capture whether two target word sets have the same relative association to both sets of attribute words. At the heart of WEAT is the statistic s(w,X,Y), which "measures the association of [a word] w with the attribute" (Caliskan et al., 2017): s(w,X,Y) = EX cos(⃗w,⃗x)−EY cos(⃗w,⃗y) (3) The normalized difference between the mean values of s(w,X,Y) across the two target word sets is called the effect size. For the sake of simplicity, we consider the case where both attribute word sets contain a single word (i.e., X = {x},Y = {y}). Proposition 1 Let X = {x},Y = {y}, and w be unbiased with respect to {(x,y)} by Definition 1. According to WEAT, an SGNS vector ⃗w is equally associated with X and Y under perfect reconstruction iff p(x) = p(y). Both theoretical and empirical work have found the squared word embedding norm to be linear in the log probability of the word. (Arora et al., 2016; Ethayarajh et al., 2018). Where α1,α2 ∈R, w is then equally associated with X and Y if 0 = cos(⃗w,⃗x)−cos(⃗w,⃗y) = 1 ∥⃗w∥2 ⟨⃗w,⃗x⟩ ∥⃗x∥2 −⟨⃗w,⃗y⟩ ∥⃗y∥2  = ⟨⃗w,⃗x⟩ p α1 log p(x)+α2 − ⟨⃗w,⃗y⟩ p α1 log p(y)+α2 (4) By the Debiasing Theorem, w is unbiased with respect to the set {(x,y)} iff ⟨⃗w,⃗x⟩= ⟨⃗w,⃗y⟩. Therefore (4) holds iff p(x) = p(y). Thus for w to be equally associated with both sets of attribute words, not only must w be unbiased with respect to {(x,y)} by Definition 1, but words x and y must also occur with equal frequency in the corpus. Despite this being implicitly required, it was not stated as a requirement in Caliskan et al. (2017) for using WEAT. If the embedding model were GloVe instead of SGNS, this requirement would still apply, since GloVe implicitly factorizes a log cooccurrence count matrix (Pennington et al., 2014) while SGNS implicitly factorizes the shifted PMI matrix (Levy and Goldberg, 2014). This, in turn, means that the test statistic and effect size of WEAT can be non-zero even when each set of target words is unbiased with respect to the attribute words. In practice, this issue often goes unnoticed because each word in the attribute set, at least for gender association, has a counterpart that appears with roughly equal frequency in most training corpora (e.g., ‘man’ vs. ‘woman’, ‘boy’ vs. ‘girl’). However, this is not guaranteed to hold, especially for more nebulous attribute sets (e.g., ‘pleasant’ vs. ‘unpleasant’ words). Proposition 2 Let X = {x},Y = {y}, and the target word sets be T1 = {w1},T2 = {w2}. Regardless of what the target words are, the effect size of their association with X and Y is maximal in one direction, according to WEAT. 
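The claim can be checked numerically before walking through the argument below: with singleton target and attribute sets, the WEAT effect size comes out as plus or minus 2 no matter which vectors are used. The sketch uses random vectors purely for illustration and follows the effect-size definition given in the text (population standard deviation in the denominator, matching the simplification below).

```python
import numpy as np

def s(w, X, Y):
    # WEAT association s(w, X, Y): mean cosine to X minus mean cosine to Y.
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.mean([cos(w, x) for x in X]) - np.mean([cos(w, y) for y in Y])

def effect_size(T1, T2, X, Y):
    # Normalized difference of mean associations over the two target sets.
    s1 = [s(w, X, Y) for w in T1]
    s2 = [s(w, X, Y) for w in T2]
    return (np.mean(s1) - np.mean(s2)) / np.std(s1 + s2)  # np.std: population std

rng = np.random.default_rng(0)
w1, w2, x, y = (rng.normal(size=300) for _ in range(4))
print(effect_size([w1], [w2], [x], [y]))   # always +2.0 or -2.0 for singleton targets
```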
In this scenario, the effect size of the association is 2 (i.e., the maximum) in one of the two directions: either w1 is more associated with X than Y, 1700 Target Word Sets Attribute Word Sets Test Statistic Effect Size p-value Outcome (WEAT) {masculine} vs. {feminine} 0.021 2.0 0.0 more male-associated {door} vs. {curtain} {girlish} vs. {boyish} −0.042 −2.0 0.5 inconclusive {woman} vs. {man} 0.071 2.0 0.0 more female-associated {masculine} vs. {feminine} 0.063 2.0 0.0 more male-associated {dog} vs. {cat} {actress} vs. {actor} −0.075 −2.0 0.5 inconclusive {womanly} vs. {manly} 0.001 2.0 0.0 more female-associated {masculine} vs. {feminine} 0.017 2.0 0.0 more male-associated {bowtie} vs. {corsage} {woman} vs. {masculine} −0.071 −2.0 0.5 inconclusive {girly} vs. {masculine} 0.054 2.0 0.0 more female-associated Table 1: By contriving the male and female attribute words, we can easily manipulate WEAT to claim that a given target word is more female-biased or male-biased than another. For example, in the top row, ⃗ door is more maleassociated than ⃗ curtain when the attribute words are ‘masculine’ and ‘feminine’, but it is more female-associated when the attribute words are ‘woman’ and ‘man’. In both cases, the associations are highly statistically significant. or w2 is. This is because the numerator of the effect size is the difference between s(w1,X,Y) and s(w2,X,Y), while the denominator is the standard deviation of {s(w,X,Y)|w ∈T1 ∪T2}, which simplifies to p (s(w1,X,Y)−s(w2,X,Y))2/4. This means that the effect size is necessarily 2 in one direction and −2 in the other; it is at its maximum regardless of how small individual similarities are. This also means that we can contrive the attribute word sets to achieve a desired outcome. For example, when the attribute word sets are {‘masculine’} and {‘feminine’}, ⃗ door is significantly more male-associated than ⃗ curtain. When the attribute sets are {‘woman’} and {‘man’}, the opposite is true: ⃗ door is significantly more femaleassociated than ⃗ curtain. In Table 1, we provide more examples of how we can easily contrive the attribute sets to claim, with high statistical significance, that a given target word is more femalebiased or male-biased than another. Conversely, we can also manipulate the attribute sets to claim that an association is not statistically significant (p = 0.5), despite a large effect size. Broadly speaking, cosine similarity is a useful measure of vector similarity and hypothesis tests are useful for testing sample differences. Because of this, WEAT seems to be an intuitive measure. However, as shown in Propositions 1 and 2, there are two key theoretical flaws to WEAT that cause it to overestimate the degree of association and ultimately make it an inappropriate metric for word embeddings. The only other metric of note quantifies association as |cos(⃗w,⃗b)|c, where⃗b is the bias subspace and c ∈R the “strictness” of the measurement (Bolukbasi et al., 2016). For the same reason discussed in Proposition 1, this measure can also overestimate the degree of association. 5 Relational Inner Product Association Given the theoretical flaws of WEAT, we derive a new measure of word embedding association using the subspace projection method, which can provably debias embeddings (section 3). Definition 2 The relational inner product association β(⃗w;⃗b) of a word vector ⃗w ∈V with respect to a relation vector ⃗b ∈V is ⟨⃗w,⃗b⟩. 
Where S is a non-empty set of ordered word pairs (x,y) that define the association,⃗b is the first principal component of {⃗x−⃗y | (x,y) ∈S}. Our metric, the relational inner product association (RIPA), is simply the inner product of a relation vector describing the association and a given word vector in the same embedding space. To use the terminology in Bolukbasi et al. (2016), RIPA is the scalar projection of a word vector onto a onedimensional bias subspace defined by the unit vector⃗b. In their experiments, Bolukbasi et al. (2016) defined⃗b as the first principal component for a set of gender difference vectors (e.g., ⃗ man− ⃗ woman). This would be the means of deriving⃗b for RIPA as well. For the sake of interpretability, we do not define⃗b as the span of difference vectors, as would be required if one were using⃗b to provably debias words with respect to S (see section 3). When⃗b is a vector, the sign of ⟨⃗w,⃗b⟩indicates the direction of the association (e.g., male or female, depending on the order of the word pairs). For higher dimensional bias subspaces, the sign of the projection cannot be interpreted in the same way. Also, as noted earlier, bias vectors are what are typically used to debias words in practice. As we show in the rest of this section, the interpretability of RIPA, its robustness to how the relation vector is defined, 1701 and its derivation from a method that provably debiases word embeddings are the key reasons why it is an ideal replacement for WEAT. Given that RIPA can be used for any embedding model that does matrix factorization, it is applicable to common embedding models such as SGNS and GloVe. 5.1 Interpreting RIPA If only a single word pair (x,y) defines the association, then the relation vector⃗b = (⃗x −⃗y)/∥⃗x − ⃗y∥, making RIPA highly interpretable. Given that RIPA is intended for embedding models that factorize a matrix M containing a co-occurrence statistic (e.g., the shifted word-context PMI matrix for SGNS), if we assume that there is no reconstruction error, we can rewrite β(⃗w;⃗b) in terms of M. Where x and y have context vectors ⃗xc and ⃗yc, λ ∈R is such that C = λW (see Lemma 2, Ethayarajh et al. (2018)), α ∈R−is a model-specific constant, and there is no reconstruction error: βSGNS(⃗w;⃗b) = (1/λ)⟨⃗w,⃗xc −⃗yc⟩ ∥⃗x−⃗y∥ = (1/λ)(PMI(x,w)−PMI(y,w)) p (1/λ)(−csPMI(x,y)+α) = 1/ √ λ p −csPMI(x,y)+α log p(w|x) p(w|y) (5) Here, csPMI(x,y) ≜PMI(x,y) + log p(x,y) and is equal to −λ∥⃗x−⃗y∥2 2+α under perfect reconstruction (Ethayarajh et al., 2018). There are three notable features of this result: 1. Ethayarajh et al. (2018) proved the conjecture by Pennington et al. (2014) that a word analogy holds over a set of words pairs (x,y) iff for every word w, log[p(w|x)/p(w|y)] is the same for every word pair (x,y). The expression in (5) is a multiple of this term. 2. Assuming no reconstruction error, if a linear word analogy holds over a set of ordered word pairs (x,y), then the co-occurrence shifted PMI (csPMI) should be the same for every word pair (Ethayarajh et al., 2018). The more x and y are unrelated, the closer that csPMI(x,y) is to −∞and β(⃗w;⃗b) is to 0. This prevents RIPA from overestimating the extent of the association simply because x and y are far apart in embedding space. 3. Because⃗b is a unit vector, β(⃗w;⃗b) is bounded in [−∥⃗w∥,∥⃗w∥]. This means that one can calculate a word’s association with respect to multiple relation vectors and then compare the resulting RIPA values. These points highlight just how robust RIPA is to the definition of ⃗b. 
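Definition 2 translates directly into a few lines of code; the sketch below (illustrative names, pretrained vectors assumed) takes the relation vector as the top principal direction of the raw difference vectors, which for a single pair reduces to (x - y)/||x - y|| as in Section 5.1.

```python
import numpy as np

def relation_vector(word_vecs, pairs):
    """Unit relation vector b from a set of ordered word pairs (x, y).

    The difference vectors are not mean-centered here, so with a single pair
    this reduces exactly to (x - y) / ||x - y||; conventions for the "first
    principal component" vary on this point.
    """
    diffs = np.stack([word_vecs[x] - word_vecs[y] for x, y in pairs])
    _, _, Vt = np.linalg.svd(diffs, full_matrices=False)
    b = Vt[0] / np.linalg.norm(Vt[0])
    # Fix the arbitrary SVD sign so the first pair projects positively on b.
    if np.dot(diffs[0], b) < 0:
        b = -b
    return b

def ripa(w_vec, b):
    # Relational inner product association: the scalar projection of w on b,
    # bounded in [-||w||, ||w||] because b is a unit vector.
    return float(np.dot(w_vec, b))
```

The sign convention (here, positive toward the first element of each ordered pair) determines which direction of the association a positive RIPA value denotes.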
As long as a word analogy holds over the word pairs that define the association – i.e., as long as the word pairs have roughly the same difference vector – the choice of word pair does not affect log[p(w|x)/p(w|y)] or csPMI(x,y). Using (‘king’, ‘queen’) instead of (‘man’, ‘woman’) to define the gender relation vector, for example, would have a negligible impact. In contrast, as shown in section 4, the lack of robustness of WEAT to the choice of attribute sets is one reason it is so unreliable. We can also interpret β(⃗w;⃗b) for other embedding models, not just SGNS. Where Xx,y denotes the frequency of a word pair (x,y) and zx,zy denote the learned bias terms for GloVe: βGloVe(⃗w;⃗b) = C  log p(x,w) p(y,w) −zx +zy  where C = 1/ √ λ p −csPMI(x,y)+α (6) Because the terms zx,zy are learned, β(⃗w;⃗b) is not as interpretable for GloVe. However, Levy et al. (2015) have conjectured that, in practice, zx,zy may be equivalent to the log counts of x and y respectively, in which case βGloVe = βSGNS. 5.2 Statistical Significance Unlike with WEAT, there is no notion of statistical significance attached to RIPA. There is a simple reason for this. Whether a word vector ⃗w is spuriously or non-spuriously associated with respect to a relation vector (⃗x −⃗y)/∥⃗x −⃗y∥depends on how frequently (w,x) and (w,y) co-occur in the training corpus; the more co-occurrences there are, the less likely the association is spurious. As shown in experiments by Ethayarajh et al. (2018), the reconstruction error for any word pair (x,y) follows a zero-centered normal distribution where the variance is a decreasing function of Xx,y. Word embeddings alone are thus not enough to ascribe a statistical significance to the association. This also suggests that the notion of statistical significance in WEAT is disingenuous, as it ignores how the spuriousness of an association depends on cooccurrence frequency in the training corpus. 1702 Word Type Word Genderedness in Corpus Genderedness in Embedding Space Change (abs.) mom −0.163 −0.648 0.485 dad 0.125 0.217 0.092 Gender-Appropriate queen −0.365 −0.826 0.462 (n = 164) king 0.058 0.200 0.142 Avg (abs.) 0.231 0.522 0.291 nurse −0.190 −1.047 0.858 doctor −0.135 −0.059 −0.077 Gender-Biased housekeeper −0.132 −0.927 0.795 (n = 68) architect −0.063 0.162 0.099 Avg (abs.) 0.253 0.450 0.197 ballpark 0.254 0.050 −0.204 calf −0.039 0.027 −0.012 Gender-Neutral hormonal −0.326 −0.551 0.225 (n = 200) speed 0.036 −0.005 −0.031 Avg (abs.) 0.125 0.119 −0.006 Table 2: On average, SGNS makes gender-appropriate words (e.g., ‘queen’) and gender-biased words (e.g., ‘nurse’) more gendered in the embedding space than they are in the training corpus. As seen in the last column (in bold), the average change in absolute genderedness is 0.291 and 0.197 respectively (p < 0.001 for both). For gender-neutral words, the average change is only −0.006 (p = 0.84): SGNS does not make them any more gendered. 6 Experiments With our experiments, we address two open questions. For one, how much of the gender association in an embedding space is due to the embedding model itself, how much is due to the training corpus, and how much is just noise? Secondly, how can we debias gender-biased words (e.g., ‘doctor’, ‘nurse’) but not gender-appropriate ones (e.g., ‘king’, ‘queen’) without a priori knowledge of which words belong in which category? 6.1 Setup For our experiments, we use SGNS embeddings trained on Wikipedia, since RIPA is highly interpretable for SGNS (see section 5.1). 
This means that for any given word in the vocabulary, we can compare its gender association in the training corpus to its gender association in the embedding space, which should be equal under perfect reconstruction. Words are grouped into three categories with respect to gender: biased, appropriate, and neutral. We create lists of biased and appropriate words using the Bolukbasi et al. (2016) lists of gender-biased and gender-appropriate analogies. For example, doctor:nurse::man:woman is biased, so we classify the first two words as biased. The last category, neutral, contains uniformly randomly sampled words that appear at least 10K times in the corpus and that are not in either of the other categories, and which we therefore expect to be gender-agnostic. 6.2 Breaking down Gender Association For any given word, the gender association in the training corpus is what the gender association in the embedding space would be if there were no reconstruction error. By comparing these two quantities, we can infer the change induced by the embedding model. Let g(w;x,y) denote the RIPA of a word w with respect to the gender relation vector defined by word pair (x,y), let ˆg(w;x,y) denote what g(w;x,y) would be under perfect reconstruction for an SGNS embedding model, and let ∆g denote the change in absolute gender association from corpus to embedding space. Where S is a set of gender-defining word pairs1 from Bolukbasi et al. (2016) and λ,α are the model-specific constants defined in section 5.1, g(w;x,y) = ⟨⃗w,⃗x−⃗y⟩ ∥⃗x−⃗y∥ ˆg(w;x,y) = 1/ √ λ p −csPMI(x,y)+α log p(w|x) p(w|y) ∆g(w;S) = ∑ (x,y)∈S g(w;x,y) |S| − ∑ (x,y)∈S ˆg(w;x,y) |S| (7) We take the absolute value of each term because the embedding model may make a word more gendered, but in the direction opposite of what is implied in the corpus. λ ←1 because we expect 1 The set of gender-defining pairs we used is {(‘woman’, ‘man’), (‘girl’, ‘boy’), (‘she’, ‘he’), (‘mother’, ‘father’), (‘daughter’, ‘son’), (‘gal’, ‘guy’), (‘female’, ‘male’), (‘her’, ‘his’), (‘herself’, ‘himself’), (‘mary’, ‘john’)}. 1703 Figure 1: Before debiasing words using subspace projection, one needs to identify which words are genderappropriate – to avoid debiasing them. The Bolukbasi et al. (2016) method of identifying these words is ineffective: it ends up precluding most gender-appropriate analogies (dotted line, left) while preserving most gender-biased analogies (dotted line, right). Our unsupervised method (dashed line) does much better in both respects. λ ≈1 in practice (Ethayarajh et al., 2018; Mimno and Thompson, 2017). Similarly, α ←−1 because it minimizes the difference between ∥⃗x−⃗y∥ and its information theoretic interpretation over the gender-defining word pairs in S, though this is an estimate and may differ from the true value of α. In Table 2, we list the gender association in the training corpus (g(w)), the gender association in embedding space ( ˆg(w)), and the absolute change (∆g(w)) for each group of words. On average, the SGNS embedding model does not make gender-neutral words any more gendered than they are in the training corpus. Given that much of the vocabulary falls into this category, this means that the embedding model does not systematically change the genderedness of most words. However, because of reconstruction error, individual words may be more or less gendered in the embedding space, simply due to chance. 
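The corpus-versus-embedding comparison of eq. (7) amounts to the computation sketched below; the conditional probabilities and csPMI values are assumed to be precomputed from co-occurrence counts in the parsed corpus, the settings λ = 1 and α = -1 follow the text above, and the container names are illustrative rather than taken from any released code.

```python
import numpy as np

LAMBDA, ALPHA = 1.0, -1.0   # model-specific constants, set as in the text

def g_embedding(w_vec, x_vec, y_vec):
    # Gender association in the embedding space: RIPA w.r.t. (x - y)/||x - y||.
    d = x_vec - y_vec
    return float(np.dot(w_vec, d) / np.linalg.norm(d))

def g_corpus(w, x, y, p_cond, cspmi):
    """Gender association implied by the corpus under perfect reconstruction.

    p_cond[(w, x)] estimates p(w | x) from co-occurrence counts, and
    cspmi[(x, y)] = PMI(x, y) + log p(x, y); both are assumed precomputed.
    """
    scale = (1.0 / np.sqrt(LAMBDA)) / np.sqrt(-cspmi[(x, y)] + ALPHA)
    return scale * np.log(p_cond[(w, x)] / p_cond[(w, y)])

def delta_g(w, w_vec, word_vecs, pairs, p_cond, cspmi):
    # Change in absolute genderedness from corpus to embedding space,
    # averaged over the gender-defining word pairs.
    g_emb = np.mean([g_embedding(w_vec, word_vecs[x], word_vecs[y]) for x, y in pairs])
    g_cor = np.mean([g_corpus(w, x, y, p_cond, cspmi) for x, y in pairs])
    return abs(g_emb) - abs(g_cor)
```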
In contrast, for words that are either gender-biased or genderappropriate, on average, the embedding model actually amplifies the gender association in the corpus. For example, for the word ‘king’, which is gender-specific by definition, the association is 0.058 in the corpus and 0.200 in the embedding space – it becomes more male-associated. For the word ‘nurse’, which is gender-biased, the association is −0.190 in the corpus and −1.047 in the embedding space – it becomes more femaleassociated. On average, the amplification is much greater for gender-appropriate words than it is for gender-biased ones, although the latter are more gendered in the corpus itself. In both cases, the change in absolute genderedness is statistically significant (p < 0.001). This amplification effect is unsurprising and can be explained by second-order similarity. Two words can be nearby in a word embedding space if they co-occur frequently in the training corpus (first-order similarity) or if there exists a large set of context words with which they both frequently co-occur (second-order similarity). The latter explains why words like ‘Toronto’ and ‘Melbourne’ are close to each other in embedding space; both are cities that appear in similar contexts. In an environment with some reconstruction error, such as low-dimensional embedding spaces, secondorder similarity permits words to be closer in embedding space than would be the case if only first-order similarity had an effect. As a result, λ⟨⃗ king, ⃗ man⟩> (PMI(‘king’,‘man’) −logk) for SGNS, for example. What is often treated as a useful property of word embeddings can have, with respect to gender bias, a pernicious effect. 6.3 Debiasing without Supervision To use the subspace projection method (Bolukbasi et al., 2016), one must have prior knowledge of which words are gender-appropriate, so that they are not debiased. Debiasing all vectors can preclude gender-appropriate analogies such as king:queen::man:woman from holding in the embedding space. To create an exhaustive list of gender-appropriate words, Bolukbasi et al. (2016) started with a small, human-labelled set of words and then trained an SVM to predict more gender1704 appropriate terms in the vocabulary. This bootstrapped list of gender-appropriate words was then left out during debiasing. The way in which Bolukbasi et al. (2016) evaluated their method is unorthodox: they tested the ability of their debiased embedding space to generate new analogies. However, this does not capture whether gender-appropriate analogies are successfully preserved and gender-biased analogies successfully precluded. In Figure 1, we show how the number of appropriate and biased analogies changes after debiasing. The x-axis captures how strongly gendered the analogy is, using the absolute RIPA value |β(⃗w;⃗b)| but replacing ⃗w with the difference vector defined by the first word pair (e.g., ⃗ king− ⃗ queen). The y-axis captures the number of analogies that meet that threshold. As seen in Figure 1, Bolukbasi et al.’s bootstrapped list of gender-appropriate words yields the opposite of what is intended: it ends up precluding most gender-appropriate analogies and preserving most gender-biased ones. This is not the fault of the debiasing method; rather, it is the result of failing to correctly identify which words in the vocabulary are gender-appropriate. For example, the bootstrapped list2 includes ‘wolf_cub’ and ‘Au_Lait’ as gender-appropriate terms, even though they are not. 
Conversely, it fails to include common gender-appropriate words such as ‘godfather’. This problem highlights how finding the right words to debias is as important as the debiasing itself. We propose an unsupervised method for finding gender-appropriate words. We first create a gender-defining relation vector ⃗b∗by taking the first principal component of gender-defining difference vectors such as ⃗ man− ⃗ woman. Using difference vectors from biased analogies, such as ⃗ doctor − ⃗ midwife, we then create a bias-defining relation vector⃗b′ the same way. We then debias a word w using the subspace projection method iff it satisfies |β(⃗w;⃗b∗)|< |β(⃗w;⃗b′)|. As seen in Figure 1, this simple condition is sufficient to preserve almost all gender-appropriate analogies while precluding most gender-biased ones. In our debiased embedding space, 94.9% of gender-appropriate analogies with a strength of at least 0.5 are preserved in the embedding space while only 36.7% of gender-biased analogies are. In contrast, the Bolukbasi et al. (2016) approach 2Available at https://github.com/tolga-b/debiaswe preserves only 16.5% of appropriate analogies with a strength of at least 0.5 while preserving 80.0% of biased ones. Recall that we use the same debiasing method as Bolukbasi et al. (2016); the difference in performance can only be ascribed to how we choose the gender-appropriate words. Combining our heuristic with other methods may yield even better results, which we leave as future work. 7 Conclusion In this paper, we answered several open questions about undesirable word associations in embedding spaces. We found that for any embedding model that implicitly does matrix factorization (e.g., SGNS, GloVe), debiasing with the subspace projection method is, under certain conditions, equivalent to training on a corpus that is unbiased with respect to the words defining the bias subspace. We proved that WEAT, the most common test of word embedding association, has theoretical flaws that cause it to systematically overestimate bias. For example, by contriving the attribute sets for WEAT, virtually any word can be classified as gender-biased relative to another. We then derived a new measure of association in word embeddings called the relational inner product association (RIPA). Using RIPA, we found that SGNS does not, on average, make most words any more gendered in the embedding space than they are in the training corpus. However, for words that are gender-biased or gender-specific by definition, SGNS amplifies the genderedness in the corpus. Acknowledgments We thank the anonymous reviewers for their insightful comments. We thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for their financial support. References Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. A latent variable model approach to PMI-based word embeddings. Transactions of the Association for Computational Linguistics, 4:385–399. Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137–1155. 1705 Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in Neural Information Processing Systems, pages 4349–4357. Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. 
Science, 356(6334):183–186. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pages 160–167. ACM. Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2018. Towards understanding linear word analogies. arXiv preprint arXiv:1810.04882. Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. arXiv preprint arXiv:1903.03862. Thomas K Landauer and Susan T Dumais. 1997. A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological review, 104(2):211. Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems, pages 2177–2185. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. David Mimno and Laure Thompson. 2017. The strange geometry of skip-gram with negative sampling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2873–2878. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and KaiWei Chang. 2018. Learning gender-neutral word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4847–4853.
2019
166
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1706–1716 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1706 Unsupervised Discovery of Gendered Language through Latent-Variable Modeling Alexander Hoyle@ Lawrence Wolf-SonkinS Hanna WallachZ Isabelle AugensteinP Ryan CotterellH @University College London, London, UK SDepartment of Computer Science, Johns Hopkins University, Baltimore, USA ZMicrosoft Research, New York City, USA HDepartment of Computer Science and Technology, University of Cambridge, Cambridge, UK PDepartment of Computer Science, University of Copenhagen, Copenhagen, Denmark [email protected], [email protected] [email protected], [email protected], [email protected] Abstract Studying the ways in which language is gendered has long been an area of interest in sociolinguistics. Studies have explored, for example, the speech of male and female characters in film and the language used to describe male and female politicians. In this paper, we aim not to merely study this phenomenon qualitatively, but instead to quantify the degree to which the language used to describe men and women is different and, moreover, different in a positive or negative way. To that end, we introduce a generative latent-variable model that jointly represents adjective (or verb) choice, with its sentiment, given the natural gender of a head (or dependent) noun. We find that there are significant differences between descriptions of male and female nouns and that these differences align with common gender stereotypes: Positive adjectives used to describe women are more often related to their bodies than adjectives used to describe men. 1 Introduction Word choice is strongly influenced by gender— both that of the speaker and that of the referent (Lakoff, 1973). Even within 24 hours of birth, parents describe their daughters as beautiful, pretty, and cute far more often than their sons (Rubin et al., 1974). To date, much of the research in sociolinguistics on gendered language has focused on laboratory studies and smaller corpora (McKee and Sherriffs, 1957; Williams and Bennett, 1975; Baker, 2005); however, more recent work has begun to focus on larger-scale datasets (Pearce, 2008; CaldasCoulthard and Moon, 2010; Baker, 2014; Norberg, 2016). These studies compare the adjectives (or beautiful lovely chaste gorgeous fertile beauteous sexy classy exquisite vivacious vibrant battered untreated barren shrewish sheltered heartbroken unmarried undernourished underweight uncomplaining nagging just sound righteous rational peaceable prodigious brave paramount reliable sinless honorable unsuitable unreliable lawless inseparable brutish idle unarmed wounded bigoted unjust brutal Male Positive Negative Female Positive Negative MISCELLANEOUS TEMPORAL SOCIAL FEELING SPATIAL QUANTITY BODY BEHAVIOR SUBSTANCE Figure 1: Adjectives, with sentiment, used to describe men and women, as represented by our model. Colors indicate the most common sense of each adjective from Tsvetkov et al. (2014); black indicates out of lexicon. Two patterns are immediately apparent: positive adjectives describing women are often related to their bodies, while positive adjectives describing men are often related to their behavior. These patterns hold generally and the differences are significant (see §4). verbs) that modify each noun in a particular gendered pair of nouns, such as boy–girl, aggregated across a given corpus. 
We extend this line of work by instead focusing on multiple noun pairs simultaneously, modeling how the choice of adjective (or verb) depends on the natural gender1 of the head 1A noun’s natural gender is the implied gender of its referent (e.g., actress refers to woman). We distinguish natural 1707 (or dependent) noun, abstracting away the noun form. To that end, we introduce a generative latentvariable model for representing gendered language, along with sentiment, from a parsed corpus. This model allows us to quantify differences between the language used to describe men and women. The motivation behind our approach is straightforward: Consider the sets of adjectives (or verbs) that attach to gendered, animate nouns, such as man or woman. Do these sets differ in ways that depend on gender? For example, we might expect that the adjective Baltimorean attaches to man roughly the same number of times as it attaches to woman, controlling for the frequency of man and woman.2 But this is not the case for all adjectives. The adjective pregnant, for example, almost always describes women, modulo the rare times that men are described as being pregnant with, say, emotion. Arguably, the gendered use of pregnant is benign—it is not due to cultural bias that women are more often described as pregnant, but rather because women bear children. However, differences in the use of other adjectives (or verbs) may be more pernicious. For example, female professors are less often described as brilliant than male professors (Storage et al., 2016), likely reflecting implicit or explicit stereotypes about men and women. In this paper, we therefore aim to quantify the degree to which the language used to describe men and women is different and, moreover, different in a positive or negative way. Concretely, we focus on three sociolinguistic research questions about the influence of gender on adjective and verb choice: Q1 What are the qualitative differences between the language used to describe men and women? For example, what, if any, are the patterns revealed by our model? Does the output from our model correlate with previous human judgments of gender stereotypes? Q2 What are the quantitative differences between the language used to describe men and women? For example, are adjectives used to describe women more often related to their bodies than adjectives used to describe men? Can we quantify such patterns using existing semantic resources (Tsvetkov et al., 2014)? gender from grammatical gender because the latter does not necessarily convey anything meaningful about the referent. 2Men are written about more often than women. Indeed, the corpus we use exhibits this trend, as shown in Tab. 1. Female Male other 2.2 other 6.8 daughter 1.4 husband 1.8 lady 2.4 king 2.1 wife 3.3 son 2.9 mother 4.2 father 4.2 girl 5.1 boy 5.1 woman 11.5 man 39.9 Total 30.2 62.7 Table 1: Counts, in millions, of male and female nouns present in the corpus of Goldberg and Orwant (2013). Q3 Does the overall sentiment of the language used to describe men and women differ? To answer these questions, we introduce a generative latent-variable model that jointly represents adjective (or verb) choice, with its sentiment, given the natural gender of a head (or dependent) noun. We use a form of posterior regularization to guide inference of the latent variables (Ganchev et al., 2010). We then use this model to study the syntactic n-gram corpus of (Goldberg and Orwant, 2013). 
To answer Q1, we conduct an analysis that reveals differences between descriptions of male and female nouns that align with common gender stereotypes captured by previous human judgements. When using our model to answer Q2, we find that adjectives used to describe women are more often related to their bodies (significant under a permutation test with p < 0.03) than adjectives used to describe men (see Fig. 1 for examples). This finding accords with previous research (Norberg, 2016). Finally, in answer to Q3, we find no significant difference in the overall sentiment of the language used to describe men and women. 2 What Makes this Study Different? As explained in the previous section, many sociolinguistics researchers have undertaken corpusbased studies of gendered language. In this section, we therefore differentiate our approach from these studies and from recent NLP research on gender biases in word embeddings and co-reference systems. Syntactic collocations and noun types. Following the methodology employed in previous sociolinguistic studies of gendered language, we use syntactic collocations to make definitive claims about gendered relationships between words. This approach stands in contrast to bag-of-words analyses, where information about gendered relationships must be indirectly inferred. By studying the 1708 adjectives and verbs that attach to gendered, animate nouns, we are able to more precisely quantify the degree to which the language used to describe men and women is different. To date, much of the corpus-based sociolinguistics research on gendered language has focused on differences between the adjectives (or verbs) that modify each noun in a particular gendered pair of nouns, such as boy– girl or man–woman (e.g., Pearce (2008); CaldasCoulthard and Moon (2010); Norberg (2016)). To assess the differences, researchers typically report top collocates3 for one word in the pair, exclusive of collocates for the other. This approach has the effect of restricting both the amount of available data and the claims that can be made regarding gendered nouns more broadly. In contrast, we focus on multiple noun pairs (including plural forms) simultaneously, modeling how the choice of adjective (or verb) depends on the natural gender of the head (or dependent) noun, abstracting away the noun form. As a result, we are able to make broader claims. The corpus of Goldberg and Orwant (2013). To extract the adjectives and verbs that attach to gendered, animate nouns, we use the corpus of Goldberg and Orwant (2013), who ran a then-state-of-the-art dependency parser on 3.5 million digitalized books. We believe that the size of this corpus (11 billion words) makes our study the largest collocational study of its kind. Previous studies have used corpora of under one billion words, such as the British National Corpus (100 million words) (Pearce, 2008), the New Model Corpus (100 million words) (Norberg, 2016), and the Bank of English Corpus (450 million words) (Moon, Rosamund, 2014). By default, the corpus of Goldberg and Orwant (2013) is broken down by year, but we aggregate the data across years to obtain roughly 37 million noun–adjectives pairs, 41 million NSUBJ–verb pairs, and 14 million DOBJ–verb pairs. We additionally lemmatize each word. For example, the noun stewardesses is lemmatized to a set of lexical features consisting of the genderless lemma STEWARD and the morphological features +FEM and +PL. This parsing and lemmatization process is illustrated in Fig. 2. Quantitative evaluation. 
Our study is also quantitative in nature: we test concrete hypotheses about differences between the language used to describe men and women. For example, we test whether 3Typically ranked by the log of the Dice coefficient. Figure 2: An example sentence with its labeled dependency parse (top) and lemmatized words (bottom). women are more often described using adjectives related to their bodies and emotions. This quantitative focus differentiates our approach from previous corpus-based sociolinguistics research on gendered language. Indeed, in the introduction to a special issue on corpus methods in the journal Gender and Language, Baker (2013) writes, “while the term corpus and its plural corpora are reasonably popular within Gender and Language (occurring in almost 40% of articles from issues 1-6), authors have mainly used the term as a synonym for ‘data set’ and have tended to carry out their analysis by hand and eye methods alone.” Moreover, in a related paper on extracting gendered language from word embeddings, Garg et al. (2018) lament that “due to the relative lack of systematic quantification of stereotypes in the literature [... they] cannot directly validate [their] results.” For an overview of quantitative evaluation, we recommend Baker (2014). Speaker versus referent. Many data-driven studies of gender and language focus on what speakers of different genders say rather than differences between descriptions of men and women. This is an easier task—the only annotation required is the gender of the speaker. For example, Ott (2016) used a topic model to study how word choice in tweets is influenced by the gender of the tweeter; Schofield and Mehr (2016) modeled gender in film dialog; and, in the realm of social media analysis, Bamman et al. (2014) discussed stylistic choices that enable classifiers to distinguish between tweets written by men versus women. Model versus data. Recent NLP research has focused on gender biases in word embeddings (Bolukbasi et al., 2016; Zhao et al., 2017) and co-reference systems (Zhao et al., 2018; Rudinger et al., 2018). These papers are primarily concerned with mitigating biases present in the output of machine learning models deployed in the real world (O’Neil, 2016). For example, Bolukbasi et al. (2016) used pairs 1709 of gendered words, such as she–he, to mitigate unwanted gender biases in word embeddings. Although it is possible to rank the adjectives (or verbs) most aligned with the embedding subspace defined by a pair of gendered words, there are no guarantees that the resulting adjectives (or verbs) were specifically used to describe men or women in the dataset from which the embeddings were learned. In contrast, we use syntactic collocations to explicitly represent gendered relationships between individual words. As a result, we are able make definitive claims about these relationships, thereby enabling us to answer sociolinguistic research questions. Indeed, it is this sociolinguistic focus that differentiates our approach from this line of work. 3 Modeling Gendered Language As explained in §1, our aim is quantify the degree to which the language used to describe men and women is different and, moreover, different in a positive or negative way. To do this, we therefore introduce a generative latent-variable model that jointly represents adjective (or verb) choice, with its sentiment, given the natural gender of a head (or dependent) noun. 
This model, which is based on the sparse additive generative model (SAGE; Eisenstein et al., 2011),4 enables us to extract ranked lists of adjectives (or verbs) that are used, with particular sentiments, to describe male or female nouns. We define G to be the set of gendered, animate nouns in our corpus and n ∈G to be one such noun. We represent n via a multi-hot vector fn ∈{0, 1}T of its lexical features—i.e., its genderless lemma, its gender (male or female), and its number (singular or plural). In other words, fn always has exactly three non-zero entries; for example, the only non-zero entries of fstewardesses are those corresponding to STEWARD, +FEM, and +PL. We define V to be the set of adjectives (or verbs) in our corpus and ν ∈V to be one such adjective (or verb). To simplify exposition, we refer to each adjective (or verb) that attaches to noun n as a neighbor of n. Finally, we define S = {POS, NEG, NEU} to be a set of three sentiments and s ∈S to be one such sentiment. Drawing inspiration from SAGE, our model jointly represents nouns, neighbors, and (latent) 4SAGE is a flexible alternative to latent Dirichlet allocation (LDA; Blei et al., 2003)—the most widely used statistical topic model. Our study could also have been conducted using LDA; drawing on SAGE was primarily a matter of personal taste. n s ν Figure 3: Graphical model depicting our model’s representation of nouns, neighbors, and (latent) sentiments. sentiments as depicted in Fig. 3. Specifically, p(ν, n, s) = p(ν | s, n) p(s | n) p(n). (1) The first factor in eq. (1) is defined as p(ν | s, n) ∝exp{mν + f ⊤ n η(ν, s)}, (2) where m ∈R|V| is a background distribution and η(ν, s) ∈RT is a neighbor- and sentiment-specific deviation. The second factor in eq. (1) is defined as p(s | n) ∝exp (ωn s ), (3) where ωn s ∈R, while the third factor is defined as p(n) ∝exp (ξn), (4) where ξn ∈R. We can then extract lists of neighbors that are used, with particular sentiments, to describe male and female nouns, ranked by scores that are a function of their deviations. For example, the score for neighbor ν when used, with positive sentiment, to describe a male noun is defined as τMASC-POS(ν) ∝exp{g⊤ MASCη(ν, POS)}, (5) where gMASC ∈{0, 1}T is a vector where only the entry that corresponds to +MASC is non-zero. Because our corpus does not contain explicit sentiment information, we marginalize out s: p(ν, n) = X s∈S p(ν | s, n) p(s | n) p(n). (6) This yields the following objective function: X n∈G X ν∈V ˆp(ν, n) log (p(ν, n)), (7) where ˆp(ν, n) ∝#(ν, n) is the empirical probability of neighbor ν and noun n in our corpus. To ensure that the latent variables in our model correspond to positive, negative, and neutral sentiments, we rely on posterior regularization (Ganchev et al., 2010). Given an additional distribution q(s | ν) that provides external information 1710 about the sentiment of neighbor ν, we regularize p(s | ν), as defined by our model, to be close (in the sense of KL-divergence) to q(s | ν). Specifically, we construct the following posterior regularizer: Rpost = KL(q(s | ν) || p(s | ν)) (8) = − X s∈S q(s | ν) log (p(s | ν)) + H(q), (9) where H(q) is constant and p(s | ν) is defined as p(s | ν) = X n∈G p(s, n | ν) (10) = X n∈G p(ν | n, s) p(s | n) p(n) p(ν) . (11) We use the combined sentiment lexicon of Hoyle et al. (2019) as q(s | ν). 
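To illustrate how these pieces fit together, the sketch below (a rough illustration, not the fitted model) builds a multi-hot feature vector f_n, computes p(ν | s, n) as in eq. (2), and ranks neighbors by the gendered score τ of eq. (5). The feature inventory, neighbor list, and parameter values are all hypothetical.

```python
import numpy as np

FEATURES = ["STEWARD", "WOMAN", "MAN", "+FEM", "+MASC", "+SG", "+PL"]  # hypothetical
NEIGHBORS = ["pretty", "gallant", "pregnant", "tall"]                  # hypothetical
IDX = {f: i for i, f in enumerate(FEATURES)}
POS, NEG, NEU = 0, 1, 2

rng = np.random.default_rng(0)
m = rng.normal(size=len(NEIGHBORS))                                # background distribution
eta = np.abs(rng.normal(size=(len(NEIGHBORS), 3, len(FEATURES))))  # eta(nu, s), kept >= 0

def featurize(lemma, gender, number):
    """Multi-hot f_n with exactly three non-zero entries: lemma, gender, number."""
    f = np.zeros(len(FEATURES))
    for feat in (lemma, gender, number):
        f[IDX[feat]] = 1.0
    return f

def p_nu_given_s_n(f_n, s):
    """Eq. (2): p(nu | s, n) proportional to exp(m_nu + f_n . eta(nu, s))."""
    logits = m + eta[:, s, :] @ f_n
    e = np.exp(logits - logits.max())
    return e / e.sum()

def tau(gender_feature, s):
    """Eq. (5): score each neighbor by exp(g . eta(nu, s)) for ranking."""
    g = np.zeros(len(FEATURES)); g[IDX[gender_feature]] = 1.0
    return dict(zip(NEIGHBORS, np.exp(eta[:, s, :] @ g)))

f_stewardesses = featurize("STEWARD", "+FEM", "+PL")   # stewardesses -> STEWARD, +FEM, +PL
print(p_nu_given_s_n(f_stewardesses, POS))
print(sorted(tau("+MASC", POS).items(), key=lambda kv: -kv[1]))
```

In the fitted model, scores of exactly this form — computed from the learned deviations η — produce the ranked lists reported in §4.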
This lexicon represents each word’s sentiment as a three-dimensional Dirichlet distribution, thereby accounting for the relative confidence in the strength of each sentiment and, in turn, accommodating polysemous and rare words. By using the lexicon as external information in our posterior regularizer, we can control the extent to which it influences the latent variables. We add the regularizer in eq. (8) to the objective function in eq. (7), using a multiplier β to control the strength of the posterior regularization. We also impose an L1-regularizer α · ||η||1 to induce sparsity. The complete objective function is then X n∈G X ν∈V ˆp(ν, n) log (p(ν, n)) + α · ||η||1 + β · Rpost. (12) We optimize eq. (12) with respect to η(·, ·), ω, and ξ using the Adam optimizer (Kingma and Ba, 2015) with α and β set as described in §4. To ensure that the parameters are interpretable (e.g., to avoid a negative η(PREGNANT, NEG) canceling out a positive η(PREGNANT, POS))), we also constrain η(·, ·) to be non-negative, although without this constraint, our results are largely the same. Relationship to pointwise mutual information. Our model also recovers pointwise mutual information (PMI), which has been used previously to identify gendered language (Rudinger et al., 2017). Proposition 1. Consider the following restricted version of our model. Let fg ∈{0, 1}2 be a onehot vector that represents only the gender of a noun n. We write g instead of n, equivalence-classing all nouns as either MASC or FEM. Let η⋆(·) : V →R2 be the maximum-likelihood estimate for the special case of our model without (latent) sentiments: p(ν | g) ∝exp(mν + f ⊤ g η⋆(ν)). (13) Then, we have τg(ν) ∝exp(PMI(ν, g)). (14) Proof. See App. B. Proposition 1 says that if we use a limited set of lexical features (i.e., only gender) and estimate our model without any regularization or latent sentiments, then ranking the neighbors by τg(ν) (i.e., by their deviations from the background distribution) is equivalent to ranking them by their PMI. This proposition therefore provides insight into how our model builds on PMI. Specifically, in contrast to PMI, 1) our model can consider lexical features other than gender, 2) our model is regularized to avoid the pitfalls of maximumlikelihood estimation, and 3) our model cleanly incorporates latent sentiments, relying on posterior regularization to ensure that the p(s | ν) is close to the sentiment lexicon of Hoyle et al. (2019). 4 Experiments, Results, and Discussion We use our model to study the corpus of Goldberg and Orwant (2013) by running it separately on the noun–adjectives pairs, the NSUBJ–verb pairs, and the DOBJ–verb pairs. We provide a full list of the lemmatized, gendered, animate nouns in App. A. We use α ∈{0, 10−5, 10−4, 0.001, 0.01} and β ∈ {10−5, 10−4, 0.001, 0.01, 0.1, 1, 10, 100}; when we report results below, we use parameter values averaged over these hyperparameter settings. 4.1 Q1: Qualitative Differences Our first research question concerns the qualitative differences between the language used to describe men and women. To answer this question, we use our model to extract ranked lists of neighbors that are used, with particular sentiments, to describe male and female nouns. As explained in §3, we rank the neighbors by their deviations from the background distribution (see, for example, eq. (5)). Qualitative evaluation. In Tab. 2, we provide, for each sentiment, the 25 largest-deviation adjectives used to describe male and female nouns. 
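As a compact illustration of the complete objective in eq. (12): all arrays below are random stand-ins for quantities the model defines (the joint of eq. (6), the posterior of eq. (11), the lexicon, and the empirical distribution), and the sign convention — treating the L1 and KL terms as penalties subtracted from the maximized log-likelihood — is an assumed reading of eq. (12), not something the paper states explicitly.

```python
import numpy as np

rng = np.random.default_rng(1)
V, G, S = 4, 2, 3    # toy sizes: neighbors, nouns, sentiments

# Random stand-ins for model quantities.
p_vn = rng.random((V, G)); p_vn /= p_vn.sum()                      # p(nu, n), eq. (6)
p_s_given_nu = rng.random((S, V)); p_s_given_nu /= p_s_given_nu.sum(axis=0)  # eq. (11)
q = rng.random((S, V)); q /= q.sum(axis=0)                         # lexicon q(s | nu)
p_hat = rng.random((V, G)); p_hat /= p_hat.sum()                   # empirical p-hat(nu, n)
eta = np.abs(rng.normal(size=(V, S, 6)))                           # deviations eta(nu, s)

def objective(alpha=1e-3, beta=0.01):
    """One reading of eq. (12): data log-likelihood minus an L1 penalty on the
    deviations and the KL-based posterior regularizer R_post of eq. (8)."""
    loglik = (p_hat * np.log(p_vn + 1e-12)).sum()                  # eq. (7)
    l1 = np.abs(eta).sum()                                         # ||eta||_1
    r_post = (q * (np.log(q + 1e-12) - np.log(p_s_given_nu + 1e-12))).sum()
    return loglik - alpha * l1 - beta * r_post

print(objective())   # in practice, optimized with Adam with eta constrained non-negative
```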
The 1711 τMASC-POS τMASC-NEG τMASC-NEU τFEM-POS τFEM-NEG τFEM-NEU Adj. Value Adj. Value Adj. Value Adj. Value Adj. Value Adj. Value faithful 2.3 unjust 2.4 german 1.9 pretty 3.3 horrible 1.8 virgin 2.8 responsible 2.2 dumb 2.3 teutonic 0.8 fair 3.3 destructive 0.8 alleged 2.0 adventurous 1.9 violent 1.8 financial 2.6 beautiful 3.4 notorious 2.6 maiden 2.8 grand 2.6 weak 2.0 feudal 2.2 lovely 3.4 dreary 0.8 russian 1.9 worthy 2.2 evil 1.9 later 1.6 charming 3.1 ugly 3.2 fair 2.6 brave 2.1 stupid 1.6 austrian 1.2 sweet 2.7 weird 3.0 widowed 2.4 good 2.3 petty 2.4 feudatory 1.8 grand 2.6 harried 2.4 grand 2.1 normal 1.9 brutal 2.4 maternal 1.6 stately 3.8 diabetic 1.2 byzantine 2.6 ambitious 1.6 wicked 2.1 bavarian 1.5 attractive 3.3 discontented 0.5 fashionable 2.5 gallant 2.8 rebellious 2.1 negro 1.5 chaste 3.3 infected 2.8 aged 1.8 mighty 2.4 bad 1.9 paternal 1.4 virtuous 2.7 unmarried 2.8 topless 3.9 loyal 2.1 worthless 1.6 frankish 1.8 fertile 3.2 unequal 2.4 withered 2.9 valiant 2.8 hostile 1.9 welsh 1.7 delightful 2.9 widowed 2.4 colonial 2.8 courteous 2.6 careless 1.6 ecclesiastical 1.6 gentle 2.6 unhappy 2.4 diabetic 0.7 powerful 2.3 unsung 2.4 rural 1.4 privileged 1.4 horrid 2.2 burlesque 2.9 rational 2.1 abusive 1.5 persian 1.4 romantic 3.1 pitiful 0.8 blonde 2.9 supreme 1.9 financial 3.6 belted 1.4 enchanted 3.0 frightful 0.5 parisian 2.7 meritorious 1.5 feudal 2.5 swiss 1.3 kindly 3.2 artificial 3.2 clad 2.5 serene 1.4 false 2.3 finnish 1.1 elegant 2.8 sullen 3.1 female 2.3 godlike 2.3 feeble 1.9 national 2.2 dear 2.2 hysterical 2.8 oriental 2.2 noble 2.3 impotent 1.7 priestly 1.8 devoted 2.0 awful 2.6 ancient 1.7 rightful 1.9 dishonest 1.6 merovingian 1.6 beauteous 3.9 haughty 2.6 feminist 2.9 eager 1.9 ungrateful 1.5 capetian 1.4 sprightly 3.2 terrible 2.4 matronly 2.6 financial 3.3 unfaithful 2.6 prussian 1.4 beloved 2.5 damned 2.4 pretty 2.5 chivalrous 2.6 incompetent 1.7 racial 0.9 pleasant 1.8 topless 3.5 asiatic 2.0 Table 2: For each sentiment, we provide the largest-deviation adjectives used to describe male and female nouns. results are striking: it is immediately apparent that positive adjectives describing women are often related to their appearance (e.g., beautiful, fair, and pretty). Sociolinguistic studies of other corpora, such as British newspapers (Caldas-Coulthard and Moon, 2010), have also revealed this pattern. Adjectives relating to fertility, such as fertile and barren, are also more prevalent for women. We provide similar tables for verbs in App. D. Negative verbs describing men are often related to violence (e.g., murder, fight, kill, and threaten). Meanwhile, women are almost always the object of rape, which aligns with our knowledge of the world and supports the collocation of rape and girl found by Baker (2014). Broadly speaking, positive verbs describing men tend to connote virtuosity (e.g., gallant and inspire), while those describing women appear more trivial (e.g., sprightly, giggle, and kiss). Correlation with human judgments. To determine whether the output from our model accords with previous human judgements of gender stereotypes, we use the corpus of Williams and Bennett (1975), which consists of 63 adjectives annotated with (binary) gender stereotypes. We measure Spearman’s ρ between these annotations and the probabilities output by our model. We find a relatively strong positive correlation of ρ = 0.59 (p < 10−6), which indicates that the output from our model aligns with common gender stereotypes captured by previous human judgements. 
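The correlation check just described takes only a few lines; the annotation and probability values below are hypothetical stand-ins for the Williams and Bennett (1975) labels and the model output.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical stand-ins: binary human stereotype annotations (1 = judged "male",
# 0 = judged "female") and the model's probability that each adjective is used
# to describe male nouns.
human_labels = np.array([1, 0, 0, 1, 1, 0, 1, 0])
model_p_masc = np.array([0.81, 0.22, 0.35, 0.64, 0.70, 0.15, 0.55, 0.40])

rho, pval = spearmanr(human_labels, model_p_masc)
print(f"Spearman rho = {rho:.2f}, p = {pval:.4f}")
```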
We also measure the correlation between continuous annotations of 300 adjectives from two follow-up studies (Williams and Best, 1990, 1977)5 and the probabilities output by our model. Here, the correlation is ρ = 0.33 (p < 10−8), and the binarized annotations agree with the output from our model for 64% of terms. We note that some of the disagreement is due to reporting bias (Gordon and Van Durme, 2013) in our corpus. For example, only men are described in our corpus as effeminate, although humans judge it to be a highly feminine adjective. 4.2 Q2: Quantitative differences Our second research question concerns the quantitative differences between the language used to describe men and women. To answer this question, we use two existing semantic resources—one for adjectives (Tsvetkov et al., 2014) and one for verbs (Miller et al., 1993)—to quantify the patterns revealed by our model. Again, we use our model to extract ranked lists of neighbors that are used, with particular sentiments, to describe male and female nouns. We consider only the 200 largest-deviation 5The studies consider the same set of words 20 years apart; we average their annotations, obtained from Garg et al. (2018). 1712 POS–BODY POS–MISC NEG–MOTION NEG–SPATIAL NEU–BEHAVIOR NEU–BODY NEU–FEELING NEU–SOCIAL 0.00 0.05 0.10 0.15 0.20 0.25 Masc Fem Figure 4: The frequency with which the 200 largestdeviation adjectives for each sentiment and gender correspond to each sense from Tsvetkov et al. (2014). neighbors for each sentiment and gender. This restriction allows us to perform an unpaired permutation test (Good, 2004) to determine whether there are significant differences between the language used to describe men and women. Adjective evaluation. Women are supposedly more often described using adjectives related to their bodies and emotions. For example, de Beauvoir (1953) writes that “from girlhood, women are socialized to live and experience their bodies as objects for another’s gaze...” Although studies of reasonably large corpora have found evidence to support this supposition (Norberg, 2016), none have done so at scale with statistical significance testing. We use the semantic resource of Tsvetkov et al. (2014), which categorizes adjectives into thirteen senses: BEHAVIOR, BODY, FEELING, MIND, etc. Specifically, each adjective has a distribution over senses, capturing how often the adjective corresponds to each sense. We analyze the largestdeviation adjectives for each sentiment and gender by computing the frequency with which these adjectives correspond to each sense. We depict these frequencies in Fig. 4. Specifically, we provide frequencies for the senses where, after Bonferroni correction, the differences between men and women are significant. We find that adjectives used to describe women are indeed more often related to their bodies and emotions than adjectives used to describe men. Verb evaluation. To evalaute verbs senses, we take the same approach as for adjectives. We use the semantic resource of Miller et al. (1993), which POS–CONTACT NEG–BODY NEG–COMM. 0.00 0.05 0.10 0.15 0.20 0.25 Masc Fem Figure 5: The frequency with which the 200 largestdeviation verbs for each sentiment and gender correspond to each sense from Miller et al. (1993). These results are only for the NSUBJ–verb pairs; there are no statistically significant differences for DOBJ–verb pairs. 
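The unpaired permutation test used above is simple enough to sketch: pool the per-adjective sense frequencies from the two gendered lists, repeatedly relabel them at random, and compare the observed gap in means against the permuted gaps. The frequencies below are random stand-ins, and treating Bonferroni correction as dividing α by the number of senses tested is an assumption about the exact correction applied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-adjective frequencies of the BODY sense for the 200
# largest-deviation adjectives of each gender (in the paper these come from
# the sense distributions of Tsvetkov et al., 2014).
body_fem = rng.beta(2, 8, size=200) + 0.03
body_masc = rng.beta(2, 9, size=200)

def unpaired_permutation_test(a, b, n_perm=10_000):
    """Two-sided p-value for the difference in means under random relabeling."""
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)

p = unpaired_permutation_test(body_fem, body_masc)
n_tests = 13   # e.g., one test per adjective sense
print(f"p = {p:.4f}; significant after Bonferroni: {p < 0.05 / n_tests}")
```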
ADJ NSUBJ DOBJ MSC FEM MSC FEM MSC FEM POS 0.34 0.38 0.37 0.36 0.37 0.36 NEG 0.30 0.31 0.33 0.34 0.34 0.35 NEU 0.36 0.31 0.30 0.30 0.30 0.29 Table 3: The frequency with which the 200 largestdeviation neighbors for each gender correspond to each sentiment, obtained using a simplified version of our model and the lexicon of Hoyle et al. (2019). Significant differences (p < 0.05/3 under an unpaired permutation test with Bonferroni correction) are in bold. categorizes verbs into fifteen senses. Each verb has a distribution over senses, capturing how often the verb corresponds to each sense. We consider two cases: the NSUBJ–verb pairs and the DOBJ–verb pairs. Overall, there are fewer significant differences for verbs than there are for adjectives. There are no statistically significant differences for the DOBJ–verb pairs. We depict the results for the NSUBJ–verb pairs in Fig. 5. We find that verbs used to describe women are more often related to their bodies than verbs used to describe men. 4.3 Q3: Differences in sentiment Our final research question concerns the overall sentiment of the language used to describe men and women. To answer this question, we use a simplified version of our model, without the latent sentiment variables or the posterior regularizer. We are then able to use the combined sentiment lexicon of Hoyle et al. (2019) to analyze the largest-deviation 1713 neighbors for each gender by computing the frequency with which each neighbor corresponds to each sentiment. We report these frequencies in Tab. 3. We find that there is only one significant difference: adjectives used to describe men are more often neutral than those used to describe women. 5 Conclusion and Limitations We presented an experimental framework for quantitatively studying the ways in which the language used to describe men and women is different and, moreover, different in a positive or negative way. We introduced a generative latent-variable model that jointly represents adjective (or verb) choice, with its sentiment, given the natural gender of a head (or dependent) noun. Via our experiments, we found evidence in support of common gender stereotypes. For example, positive adjectives used to describe women are more often related to their bodies than adjectives used to describe men. Our study has a few limitations that we wish to highlight. First, we ignore demographics (e.g., age, gender, location) of the speaker, even though such demographics are likely influence word choice. Second, we ignore genre (e.g., news, romance) of the text, even though genre is also likely to influence the language used to describe men and women. In addition, depictions of men and women have certainly changed over the period covered by our corpus; indeed, Underwood et al. (2018) found evidence of such a change for fictional characters. In future work, we intend to conduct a diachronic analysis in English using the same corpus, in addition to a cross-linguistic study of gendered language. Acknowledgments We would like to thank the three anonymous ACL 2019 reviewers for their comments on the submitted version, as well as the anonymous reviewers of a previous submission. We would also like to thank Adina Williams and Eleanor Chodroff for their comments on versions of the manuscript. The last author would like to acknowledge a Facebook fellowship. References Paul Baker. 2005. Public discourses of gay men. Routledge. Paul Baker. 2013. Introduction: Virtual special issue of gender and language on corpus approaches. 
Gender and Language, 1(1). Paul Baker. 2014. Using corpora to analyze gender. A&C Black. David Bamman, Jacob Eisenstein, and Tyler Schnoebelen. 2014. Gender identity and lexical variation in social media. Journal of Sociolinguistics, 18(2):135–160. Simone de Beauvoir. 1953. The second sex. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022. Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam T. Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems, pages 4349–4357. Carmen Rosa Caldas-Coulthard and Rosamund Moon. 2010. ‘Curvy, hunky, kinky’: Using corpora as tools for critical analysis. Discourse & Society, 21(2):99– 133. Jacob Eisenstein, Amr Ahmed, and Eric P Xing. 2011. Sparse Additive Generative Models of Text. page 8. Kuzman Ganchev, Jennifer Gillenwater, Ben Taskar, et al. 2010. Posterior regularization for structured latent variable models. Journal of Machine Learning Research, 11(Jul):2001–2049. Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635–E3644. Yoav Goldberg and Jon Orwant. 2013. A dataset of syntactic-ngrams over time from a very large corpus of English books. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 241–247, Atlanta, Georgia, USA. Association for Computational Linguistics. Phillip I. Good. 2004. Permutation, parametric, and bootstrap tests of hypotheses. Jonathan Gordon and Benjamin Van Durme. 2013. Reporting Bias and Knowledge Acquisition. In Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, AKBC ’13, pages 25–30, New York, NY, USA. ACM. Alexander Hoyle, Lawrence Wolf-Sonkin, Hanna Wallach, Ryan Cotterell, and Isabelle Augenstein. 2019. Combining disparate sentiment lexica with a multiview variational autoencoder. In Proceedings of the 2019 Conference of the North American Chapter of 1714 the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers). Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR). Robin Lakoff. 1973. Language and woman’s place. Language in Society, 2(1):45–79. John P. McKee and Alex C. Sherriffs. 1957. The differential evaluation of males and females. Journal of Personality, 25(3):356–371. George A. Miller, Claudia Leacock, Randee Tengi, and Ross T. Bunker. 1993. A semantic concordance. In Proceedings of the workshop on Human Language Technology (HLT), pages 303–308. Association for Computational Linguistics. Moon, Rosamund. 2014. From gorgeous to grumpy: Adjectives, age, and gender. Gender and Language, 8(1):5–41. Cathrine Norberg. 2016. Naughty Boys and Sexy Girls: The Representation of Young Individuals in a WebBased Corpus of English. Journal of English Linguistics, 44(4):291–317. Cathy O’Neil. 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books. Margaret Ott. 2016. Tweet like a girl: Corpus analysis of gendered language in social media. Michael Pearce. 2008. 
Investigating the collocational behaviour of man and woman in the BNC using Sketch Engine. Corpora, 3(1):1–29. Jeffrey Z. Rubin, Frank J. Provenzano, and Zella Luria. 1974. The eye of the beholder: Parents’ views on sex of newborns. American Journal of Orthopsychiatry, 44(4):512. Rachel Rudinger, Chandler May, and Benjamin Van Durme. 2017. Social bias in elicited natural language inferences. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 74–79. Rachel Rudinger, Aaron Steven White, and Benjamin Van Durme. 2018. Neural models of factuality. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 731–744. Association for Computational Linguistics. Alexandra Schofield and Leo Mehr. 2016. Genderdistinguishing features in film dialogue. In Proceedings of the Fifth Workshop on Computational Linguistics for Literature, pages 32–39, San Diego, California, USA. Association for Computational Linguistics. Daniel Storage, Zachary Horne, Andrei Cimpian, and Sarah-Jane Leslie. 2016. The frequency of “brilliant” and “genius” in teaching evaluations predicts the representation of women and African Americans across fields. PloS one, 11(3):e0150194. Yulia Tsvetkov, Nathan Schneider, Dirk Hovy, Archna Bhatia, Manaal Faruqui, and Chris Dyer. 2014. Augmenting English adjective senses with supersenses. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), Reykjavik, Iceland. European Language Resources Association (ELRA). Ted Underwood, David Bamman, and Sabrina Lee. 2018. The transformation of gender in englishlanguage fiction. John E. Williams and Susan M. Bennett. 1975. The definition of sex stereotypes via the adjective check list. Sex Roles, 1(4):327–337. John E. Williams and Deborah L. Best. 1977. Sex Stereotypes and Trait Favorability on the Adjective Check List. Educational and Psychological Measurement, 37(1):101–110. John E. Williams and Deborah L. Best. 1990. Measuring sex stereotypes: a multination study. Newbury Park, Calif. : Sage. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979–2989, Copenhagen, Denmark. Association for Computational Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20. Association for Computational Linguistics. 1715 A List of Gendered, Animate Nouns Tab. 4 contains the full list of gendered, animate nouns that we use. We consider each row in this table to be the inflected forms of a single lemma. 
Male Female Singular Plural Singular Plural man men woman women boy boys girl girls father fathers mother mothers son sons daughter daughters brother brothers sister sisters husband husbands wife wives uncle uncles aunt aunts nephew nephews niece nieces emperor emperors empress empresses king kings queen queens prince princes princess princesses duke dukes duchess duchesses lord lords lady ladies knight knights dame dames waiter waiters waitress waitresses actor actors actress actresses god gods goddess goddesses policeman policemen policewoman policewomen postman postmen postwoman postwomen hero heros heroine heroines wizard wizards witch witches steward stewards stewardess stewardesses he – she – Table 4: Gendered, animate nouns. B Relationship to PMI Proposition 1. Consider the following restricted version of our model. Let fg ∈{0, 1}2 be a onehot vector that represents only the gender of a noun. We write g instead of n, equivalence-classing all nouns as either MASC or FEM. Let η⋆(·) : V →R2 be the maximum-likelihood estimate for the special case of our model without (latent) sentiments: p(ν | g) ∝exp(mν + f ⊤ g η⋆(ν)). (15) Then, we have τg(ν) ∝exp(PMI(ν, g)). (16) Proof. First, we note our model has enough parameters to fit the empirical distribution exactly: ˆp(ν | g) = p(ν | g) (17) ∝exp{mν + f ⊤ g η⋆(ν)}. (18) Then, we proceed with an algebraic manipulation of the definition of pointwise mutual information: PMI(ν, g) = log ˆp(ν, n) ˆp(ν) ˆp(n) (19) = log ˆp(ν | n) ˆp(ν) (20) = log p(ν | n) ˆp(ν) (21) = log p(ν | n) exp{mν} (22) = log 1 Z exp{mν + f ⊤ g η⋆(ν)} exp{mν} (23) = log 1 Z exp{f ⊤ g η⋆(ν)} (24) = f ⊤ g η⋆(ν) −log Z. (25) Now we have τg(ν) ∝exp{f⊤ g η⋆(ν)} (26) ∝exp{f⊤ g η⋆(ν) −log Z} (27) = exp(PMI(ν, g)), (28) which is what we wanted to show. C Senses In Tab. 5, we list the senses for adjectives (Tsvetkov et al., 2014) and for verbs (Miller et al., 1993). Adjectives Verbs Behavior Body Body Change Feeling Cognition Mind Communication Miscellaneous Competition Motion Consumption Perception Contact Quantity Creation Social Emotion Spatial Motion Substance Perception Temporal Possession Weather Social Stative Weather Table 5: Senses for adjectives and verbs. D Additional Results In Tab. 6 and Tab. 7, we provide the largestdeviation verbs used to describe male and female nouns for NSUBJ–verb pairs and DOBJ–verb pairs. 
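To see Prop. 1 in action, the sketch below ranks neighbors for each gender directly by PMI computed from toy counts; by the proposition, the restricted model's τ_g(ν) would induce the same ordering. The counts are hypothetical.

```python
import numpy as np
from collections import Counter

# Hypothetical (gender, neighbor) attachment counts.
counts = Counter({("MASC", "gallant"): 40, ("FEM", "gallant"): 5,
                  ("MASC", "pretty"): 8,  ("FEM", "pretty"): 60,
                  ("MASC", "tall"): 30,   ("FEM", "tall"): 25})

total = sum(counts.values())
p_g, p_nu = Counter(), Counter()
for (g, nu), c in counts.items():
    p_g[g] += c / total
    p_nu[nu] += c / total

def pmi(nu, g):
    """PMI(nu, g) = log p(nu, g) / (p(nu) p(g)), as in eq. (19)."""
    return np.log(counts[(g, nu)] / total / (p_nu[nu] * p_g[g]))

for g in ("MASC", "FEM"):
    ranked = sorted(p_nu, key=lambda nu: pmi(nu, g), reverse=True)
    print(g, ranked)   # same ordering that tau_g(nu) gives in the restricted model
```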
1716 τMASC-POS τMASC-NEG τMASC-NEU τFEM-POS τFEM-NEG τFEM-NEU Verb Value Verb Value Verb Value Verb Value Verb Value Verb Value succeed 1.6 fight 1.2 extend 0.7 celebrate 2.4 persecute 2.1 faint 0.7 protect 1.4 fail 1.0 found 0.8 fascinate 0.8 faint 1.0 be 1.1 favor 1.3 fear 1.0 strike 1.3 facilitate 0.7 fly 1.0 go 0.4 flourish 1.3 murder 1.5 own 1.1 marry 1.8 weep 2.3 find 0.1 prosper 1.7 shock 1.6 collect 1.1 smile 1.8 harm 2.2 fly 0.4 support 1.5 blind 1.6 set 0.8 fan 0.8 wear 2.0 fall 0.1 promise 1.5 forbid 1.5 wag 1.0 kiss 1.8 mourn 1.7 wear 0.9 welcome 1.5 kill 1.3 present 0.9 champion 2.2 gasp 1.1 leave 0.7 favour 1.2 protest 1.3 pretend 1.1 adore 2.0 fatigue 0.7 fell 0.1 clear 1.9 cheat 1.3 prostrate 1.1 dance 1.7 scold 1.8 vanish 1.3 reward 1.8 fake 0.8 want 0.9 laugh 1.6 scream 2.1 come 0.7 appeal 1.6 deprive 1.5 create 0.9 have 1.4 confess 1.7 fertilize 0.6 encourage 1.5 threaten 1.3 pay 1.1 play 1.0 get 0.5 flush 0.5 allow 1.5 frustrate 0.9 prompt 1.0 give 0.8 gossip 2.0 spin 1.6 respect 1.5 fright 0.9 brazen 1.0 like 1.8 worry 1.8 dress 1.4 comfort 1.4 temper 1.4 tarry 0.7 giggle 1.4 be 1.3 fill 0.2 treat 1.3 horrify 1.4 front 0.5 extol 0.6 fail 0.4 fee 0.2 brave 1.7 neglect 1.4 flush 0.3 compassionate 1.9 fight 0.4 extend 0.1 rescue 1.5 argue 1.3 reach 0.9 live 1.4 fake 0.3 sniff 1.6 win 1.5 denounce 1.3 escape 0.8 free 0.9 overrun 2.4 celebrate 1.1 warm 1.5 concern 1.2 gi 0.7 felicitate 0.6 hurt 1.8 clap 1.1 praise 1.4 expel 1.7 rush 0.6 mature 2.2 complain 1.7 appear 0.9 fit 1.4 dispute 1.5 duplicate 0.5 exalt 1.7 lament 1.5 gi 0.8 wish 1.4 obscure 1.4 incarnate 0.5 surpass 1.7 fertilize 0.5 have 0.5 grant 1.3 damn 1.4 freeze 0.5 meet 1.1 feign 0.5 front 0.5 Table 6: The largest-deviation verbs used to describe male and female nouns for NSUBJ–verb pairs. 
τMASC-POS τMASC-NEG τMASC-NEU τFEM-POS τFEM-NEG τFEM-NEU Verb Value Verb Value Verb Value Verb Value Verb Value Verb Value praise 1.7 fight 1.8 set 1.5 marry 2.3 forbid 1.3 have 1.0 thank 1.7 expel 1.8 pay 1.2 assure 3.4 shame 2.5 expose 0.8 succeed 1.7 fear 1.6 escape 0.4 escort 1.2 escort 1.3 escort 1.4 exalt 1.2 defeat 2.4 use 2.1 exclaim 1.0 exploit 0.9 pour 2.1 reward 1.8 fail 1.3 expel 0.9 play 2.7 drag 2.1 marry 1.3 commend 1.7 bribe 1.8 summon 1.7 pour 2.6 suffer 2.2 take 1.1 fit 1.4 kill 1.6 speak 1.3 create 2.0 shock 2.1 assure 1.6 glorify 2.0 deny 1.5 shop 2.6 have 1.8 fright 2.4 fertilize 1.6 honor 1.6 murder 1.7 excommunicate 1.3 fertilize 1.8 steal 2.0 ask 1.0 welcome 1.9 depose 2.3 direct 1.1 eye 0.9 insult 1.8 exclaim 0.6 gentle 1.8 summon 2.0 await 0.9 woo 3.3 fertilize 1.6 strut 2.3 inspire 1.7 order 1.9 equal 0.4 strut 3.1 violate 2.4 burn 1.7 enrich 1.7 denounce 1.7 appoint 1.7 kiss 2.6 tease 2.3 rear 1.5 uphold 1.5 deprive 1.6 animate 1.1 protect 2.1 terrify 2.1 feature 0.9 appease 1.5 mock 1.6 follow 0.7 win 2.0 persecute 2.1 visit 1.3 join 1.4 destroy 1.5 depose 1.8 excel 1.6 cry 1.8 saw 1.3 congratulate 1.3 deceive 1.7 want 1.1 treat 2.3 expose 1.3 exchange 0.8 extol 1.1 bore 1.6 reach 0.9 like 2.2 burn 2.6 shame 1.6 respect 1.7 bully 1.5 found 0.8 entertain 2.0 scare 2.0 fade 1.2 brave 1.7 enrage 1.4 exempt 0.4 espouse 1.4 frighten 1.8 signal 1.2 greet 1.6 shop 2.7 tip 1.8 feature 1.2 distract 2.3 see 1.2 restore 1.5 elect 2.2 elect 1.7 meet 2.2 weep 2.3 present 1.0 clear 1.5 compel 2.1 unmake 1.5 wish 1.9 scream 2.3 leave 0.8 excite 1.2 offend 1.5 fight 1.2 fondle 1.9 drown 2.1 espouse 1.3 flatter 0.9 scold 1.4 prevent 1.1 saw 1.8 rape 2.0 want 1.1 Table 7: The largest-deviation verbs used to describe male and female nouns for DOBJ–verb pairs.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1717–1726 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1717 Topic Sensitive Attention on Generic Corpora Corrects Sense Bias in Pretrained Embeddings Vihari Piratla∗ IIT Bombay Sunita Sarawagi IIT Bombay Soumen Chakrabarti IIT Bombay Abstract Given a small corpus DT pertaining to a limited set of focused topics, our goal is to train embeddings that accurately capture the sense of words in the topic in spite of the limited size of DT . These embeddings may be used in various tasks involving DT . A popular strategy in limited data settings is to adapt pretrained embeddings E trained on a large corpus. To correct for sense drift, fine-tuning, regularization, projection, and pivoting have been proposed recently. Among these, regularization informed by a word’s corpus frequency performed well, but we improve upon it using a new regularizer based on the stability of its cooccurrence with other words. However, a thorough comparison across ten topics, spanning three tasks, with standardized settings of hyper-parameters, reveals that even the best embedding adaptation strategies provide small gains beyond well-tuned baselines, which many earlier comparisons ignored. In a bold departure from adapting pretrained embeddings, we propose using DT to probe, attend to, and borrow fragments from any large, topic-rich source corpus (such as Wikipedia), which need not be the corpus used to pretrain embeddings. This step is made scalable and practical by suitable indexing. We reach the surprising conclusion that even limited corpus augmentation is more useful than adapting embeddings, which suggests that non-dominant sense information may be irrevocably obliterated from pretrained embeddings and cannot be salvaged by adaptation. 1 Introduction Word embeddings (Mikolov et al., 2013; Pennington et al., 2014) benefit many natural language processing (NLP) tasks. Often, a group of tasks may involve a limited corpus DT pertaining to a few focused topics, e.g., discussion boards on ∗[email protected] Physics, video games, or Unix, or a forum for discussing medical literature. Because DT may be too small to train word embeddings to sufficient quality, a prevalent practice is to harness general-purpose embeddings E pretrained on a broad-coverage corpus, not tailored to the topics of interest. The pretrained embeddings are sometimes used as-is (‘pinned’). Even if E is trained on a ‘universal’ corpus, considerable sense shift may exist in the meaning of polysemous words and their cooccurrences and similarities with other words. In a corpus about Unix, ‘cat’ and ‘print’ are more similar than in Wikipedia. ‘Charge’ and ‘potential’ are more related in a Physics corpus than in Wikipedia. Thus, pinning can lead to poor target task performance in case of serious sense mismatch. Another popular practice is to initialize the target embeddings to the pretrained vectors, but then “fine-tune” using DT to improve performance in the target (Mou et al., 2015; Min et al., 2017; Howard and Ruder, 2018). As we shall see, the number of epochs of fine-tuning is a sensitive knob — excessive fine-tuning might lead to “catastrophic forgetting” (Kirkpatrick et al., 2017) of useful word similarities in E, and too little finetuning may not adapt to target sense. 
Even if we are given development (‘dev’) sets for target tasks, the best balancing act between a pretrained E and a topic-focused DT is far from clear. Should we fine-tune (all word vectors) in epochs and stop when dev performance deteriorates? Or should we keep some words close to their pretrained embeddings (a form of regularization) and allow others to tune more aggressively? On what properties of E and DT should the regularization strength of each word depend? Our first contribution is a new measure of semantic drift of a word from E to DT , which can be used to control the regularization strength. In terms of perplexity, we show that this is superior to both epoch1718 based tuning, as well as regularization based on simple corpus frequencies of words (Yang et al., 2017). Yet another option is to learn projections to align generic embeddings to the target sense (Bollegala et al., 2015; Barnes et al., 2018; K Sarma et al., 2018), or to a shared common space (Yin and Sch¨utze, 2016; Coates and Bollegala, 2018; Bollegala and Bao, 2018) However, in carefully controlled experiments, none of the proposed approaches to adapting pretrained embeddings consistently beats the trivial baseline of discarding them and training afresh on DT ! Our second contribution is to explore other techniques beyond adapting generic embeddings E. Often, we might additionally have easy access to a broad corpus DS like Wikipedia. DS may span many diverse topics, while DT focuses on one or few, so there may be large overall drift from DS to DT too. However, a judicious subset c DS ⊂DS may exist that would be excellent for augmenting DT . The large size of DS is not a problem: we use an inverted index that we probe with documents from DT to efficiently identify c DS. Then we apply a novel perplexity-based joint loss over c DS ∪DT to fit adapted word embeddings. While most of recent research focus has been on designing better methods of adapting pretrained embeddings, we show that retraining with selected source text is significantly more accurate than the best of embeddings-only strategy, while runtime overheads are within practical limits. An important lesson is that non-dominant sense information may be irrevocably obliterated from generic embeddings; it may not be possible to salvage this information by post-facto adaptation. Summarizing, our contributions are: • We propose new formulations for training topicspecific embeddings on a limited target corpus DT by (1) adapting generic pre-trained word embeddings E, and/or (2) selecting from any available broad-coverage corpus DS. • We perform a systematic comparison of our and several recent methods on three tasks spanning ten topics and offer many insights. • Our selection of c DS from DS and joint perplexity minimization on c DS ∪DT perform better than pure embedding adaptation methods, at the (practical) cost of processing DS. • We evaluate our method even with contextual embeddings. The relative performance of the adaptation alternatives remain fairly stable whether the adapted embeddings are used on their own, or concatenated with contextsensitive embeddings (Peters et al., 2018; Cer et al., 2018). 2 Related work and baselines CBOW We review the popular CBOW model for learning unsupervised word representations (Mikolov et al., 2013). As we scan the corpus, we collect a focus word w and a set C of context words around it, with corresponding embedding vectors uuuw ∈Rn and vvvc ∈Rn, where c ∈C. 
The two embedding matrices UUU,VVV are estimated as: max UUU,VVV X ⟨w,C⟩∈D σ(uuuw · vvvC) + X ¯w∼D σ(−uuu ¯w · vvvC) (1) Here vvvC is the average of the context vectors in C. ¯w is a negative focus word sampled from a slightly distorted unigram distribution of D. Usually downstream applications use only the embedding matrix UUU, with each word vector scaled to unit length. Apart from CBOW, Mikolov et al. (2013) defined the related skipgram model, and (Pennington et al., 2014) proposed the Glove model, which can also be used in our framework. We found CBOW to work better for our downstream tasks. Src, Tgt and Concat baselines In the ‘Src’ option, pre-trained embeddings uuuS w trained only on a large corpus are used as-is. The other extreme, called ‘Tgt’, is to train word embeddings from scratch on the limited target corpus DT . In our experiments we found that Src performs much worse than Tgt, indicating the presence of significant drift in prominent word senses. Two other simple baselines, are ‘Concat’, that concatenates the source and target trained embeddings and let the downstream task figure out their relative roles, and ’Avg’ that following (Coates and Bollegala, 2018) takes their simple average. Another option is to let the downstream task learn to combine multiple embeddings as in (Zhang et al., 2016). As word embeddings have gained popularity for representing text in learning models, several methods have been proposed for enriching small datasets with pre-trained embeddings. Adapting pre-trained embeddings SrcTune: A popular method (Min et al., 2017; Wang et al., 2017; Howard and Ruder, 2018) is to use the source embeddings uuuS w to initialize uuuw 1719 and thereafter train on DT . We call this ‘SrcTune’. Fine-tuning requires careful control of the number of epochs with which we train on DT . Excessive training can wipe out any benefit of the source because of catastrophic forgetting. Insufficient training may not incorporate target corpus senses in case of polysemous words, and adversely affect target tasks (Mou et al., 2015). The number of epochs can be controlled using perplexity on a held-out DT , or using downstream tasks. Howard and Ruder (2018) propose to fine-tune a whole language model using careful differential learning rates. However, epoch-based termination may be inadequate. Different words may need diverse trade-offs between the source and target topics, which we discuss next. RegFreq (frequency-based regularization): Yang et al. (2017) proposed to train word embeddings using DT , but with a regularizer to prevent a word w’s embedding from drifting too far from the source embedding (uuuS w). The weight of the regularizer is meant to be inversely proportional to the concept drift of w across the two corpus. Their limitation was that corpus frequency was used as a surrogate for stability; high stability was awarded to only words frequent in both corpora. As a consequence, very few words in a focused DT about Physics will benefit from a broad coverage corpus like Wikipedia. Thousands of words like galactic, stars, motion, x-ray, and momentum will get low stability, although their prominent sense is the same in the two corpora. We propose a better regularization scheme in this paper. Unlike us, Yang et al. (2017) did not compare with fine-tuning. Projection-based methods attempt to project embeddings of one kind to another, or to a shared common space. Bollegala et al. (2014) and Barnes et al. 
(2018) proposed to learn a linear transformation between the source and target embeddings. Yin and Sch¨utze (2016) transform multiple embeddings to a common ‘meta-embedding’ space. Simple averaging are also shown to be effective (Coates and Bollegala, 2018), and a recent (Bollegala and Bao, 2018) auto-encoder based metaembedder (AEME) is the state of the art. K Sarma et al. (2018) proposed CCA to project both embeddings to a common sub-space. Some of these methods designate a subset of the overlapping words as pivots to bridge the target and source parameters in various ways (Blitzer et al., 2006; Ziser and Reichart, 2018; Bollegala et al., 2015). Many such techniques were proposed in a crossdomain setting, and specifically for the sentiment classification task. Gains are mainly from effective transfer of sentiment representation across domains. Our challenge arises when a corpus with broad topic coverage pretrains dominant word senses quite different from those needed by tasks associated with narrower topics. Language models for task transfer Complementary to the technique of adapting individual word embeddings is the design of deeper sequence models for task-to-task transfer. Cer et al. (2018); Subramanian et al. (2018) propose multi-granular transfer of sentence and word representations across tasks using Universal Sentence Encoders. ELMo (Peters et al., 2018) trains a multi-layer sequence model to build a contextsensitive representation of words in a sentence. ULMFiT (Howard and Ruder, 2018) present additional tricks such as gradual unfreezing of parameters layer-by-layer, and exponentially more aggressive fine-tuning toward output layers. Devlin et al. (2018) propose a deep bidirectional language model for generic contextual word embeddings. We show that our topic-sensitive embeddings provide additional benefit even when used with contextual embeddings. 3 Proposed approaches We explore two families of methods: (1) those that have access to only pretrained embeddings (Sec 3.1), and (2) those that also have access to a source corpus with broad topic coverage (Sec 3.2). 3.1 RegSense: Stability-based regularization Our first contribution is a more robust definition of stability to replace the frequency-based regularizer of RegFreq. We first train word vectors on DT , and assume the pretrained embeddings E are available. Let the focus embeddings of word w in E and DT be uuuS w and uuuT w. We overload E ∩DT as words that occur in both. For each word w ∈E ∩DT , we compute N(K) S (w, E ∩DT ), the K nearest neighbors of w with respect to the generic embeddings, i.e., with the largest values of cos(uuuS w,uuuS n) from E∩DT . Here K is a suitable hyperparameter. Now we define stability(w) = P n∈N(K) S (w,E∩DT ) cos(uuuT w,uuuT n) |N(K) S (w, E ∩DT )| (2) 1720 Intuitively, if we consider near neighbors n of w in terms of source embeddings, and most of these n’s have target embeddings very similar to the target embedding of w, then w is stable across E and DT , i.e., has low semantic drift from E to DT . While many other forms of stability can achieve the same ends, ours seems to be the first formulation that goes beyond mere word frequency and employs the topological stability of near-neighbors in the embedding space. Here is why this is important. 
Going from a generic corpus like Wikipedia to the very topic-focused StackExchange (Physics) corpus DT , the words x-ray, universe, kilometers, nucleons, absorbs, emits, sqrt, anode, diodes, and km/h have large stability per our definition above, but low stability according to Yang et al.’s frequency method since they are (relatively) rare in source. Using their method, therefore, these words will not benefit from reliable pretrained embeddings. Finally, the word regularization weight is: R(w) = max(0, tanh λ stability(w))  . (3) Here λ is a hyperparameter. R(w) above is a replacement for the regularizer used by Yang et al. (2017). If R(w) is large, it is regularized more heavily toward its source embedding, keeping uuuw closer to uuuS w. The modified CBOW loss is: max UUU,VVV X ⟨w,C⟩∈D σ(uuuw · vvvC) + X ¯w∼D σ(−uuu ¯w · vvvC) + X w R(w) ∥uuuw −uuuS w∥2 (4) Our R(w) performs better than Yang et al.’s. 3.2 Source selection and joint perplexity To appreciate the limitations of regularization, consider words like potential, charge, law, field, matter, medium, etc. These will get small stability (R(w)) values because their dominant senses in a universal corpus do not match with those in a Physics corpus (DT ), but DT may be too limited to wipe that dominant sense for a subset of words while preserving the meaning of stable words. However, there are plenty of high-quality broad-coverage sources like Wikipedia that includes plenty of Physics documents that could gainfully supplement DT . Therefore, we seek to include target-relevant documents from a generic source corpus DS, even if the dominant sense of a word in DS does not match that in DT . The goal is to do this without solving the harder problem of unsupervised, expensive and imperfect sense discovery in DS and sense tagging of DT , and using per-sense embeddings. The main steps of the proposed approach, SrcSel, are shown in Figure 1. Before describing the steps in detail, we note that preparing and probing a standard inverted index (Baeza-Yates and Ribeiro-Neto, 1999) are extremely fast, owing to decades of performance optimization. Also, index preparation can be amortized over multiple target tasks. (The granularity of a ‘document’ can be adjusted to the application.) 1: Index all source docs DS in a text retrieval engine. 2: Initialize a score accumulator as for each source doc s ∈DS. 3: for each target doc t ∈DT do 4: Get source docs most similar to t. 5: Augment their score accumulators. 6: c DS ←∅ 7: for each source doc s ∈DS do 8: if as is “sufficiently large” then 9: Add s to c DS. 10: Fit word embeddings to optimize a joint objective over c DS ∪DT . Figure 1: Main steps of SrcSel. Selecting source documents to retain: Let s ∈ DS, t ∈DT be source and target documents. Let sim(s, t) be the similarity between them, in terms of the TFIDF cosine score commonly used in Information Retrieval (Baeza-Yates and RibeiroNeto, 1999). The total vote of DT for s is then P t∈DT sim(s, t). We choose a suitable cutoff on this aggregate score, to reduce DS to c DS, as follows. Intuitively, if we hold out a randomly sampled part of DT , our cutoff should let through a large fraction (we used 90%) of the held-out part. Once we find such a cutoff, we apply it to DS and retain the source documents whose aggregate scores exceed the cutoff. Beyond mere selection, we design a joint perplexity objective over c DS ∪ DT , with a term for the amount of trust we place in a retained source document. 
This limits damage from less relevant source documents that slipped through the text retrieval filter. Since the retained documents are weighted based on their relevance to the topical target corpus DT , we found it beneficial to also include a percentage (we used 10%) of randomly selected documents from DS. We refer to the method that only uses documents retained 1721 using text retrieval filter as SrcSel:R and only randomly selected documents from DS as SrcSel:c. SrcSel uses documents both from the retrieval filter and random selection. Joint perplexity objective: Similar to Eqn. (1), we will sample word and context ⟨w, C⟩from DT and c DS. Given our limited trust in c DS, we will give each sample from c DS an alignment score Q(w, C). This should be large when w is used in a context similar to contexts in DT . We judge this based on the target embedding uuuT w: Q(w, C) = max  0, cos uuuT w,vvvT C  . (5) Since uuuw represents the sense of the word in the target, source contexts C which are similar will get a high score. Similarity in source embeddings is not used here because our intent is to preserve the target senses. We tried other forms such as dot-product or its exponential and chose the above form because it is bounded and hence less sensitive to gross noise in inputs. The word2vec objective (1) is enhanced to X ⟨w,C⟩∈DT h σ(uuuw · vvvC) + P ¯w∼DT σ(−uuu ¯w · vvvC) i + X ⟨w,C⟩∈c DS Q(w, C) h σ(uuuw · vvvC)+ P ¯w∼c DSσ(−uuu ¯w · vvvC) i . (6) The first sum is the regular word2vec loss over DT . Word ¯w is sampled from the vocabulary of DT as usual, according to a suitable distribution. The second sum is over the retained source documents c DS. Note that Q(w, C) is computed using the pre-trained target embeddings and does not change during the course of training. SrcSel+RegSense combo: Here we combine objective (6) with the regularization term in (4), where R uses all of E as in RegSense. 4 Experiments We compare the methods discussed thus far, with the goal of answering these research questions: 1. Can word-based regularization (RegFreq and RegSense) beat careful termination at epoch granularity, after initializing with source embeddings (SrcTune)? 2. How do these compare with just fusing Src and Tgt via recent meta-embedding methods like AAEME (Bollegala and Bao, 2018)1? 1We used the implementation available at: https://github.com/CongBao/AutoencodedMetaEmbedding 3. Does SrcSel provide sufficient and consistent gains over RegSense to justify the extra effort of processing a source corpus? 4. Do contextual embeddings obviate the need for adapting word embeddings? We also establish that initializing with source embeddings also improves regularization methods. (Curiously, RegFreq was never combined with source initialization.) Topics and tasks We compare across 15 topic-task pairs spanning 10 topics and 3 task types: an unsupervised language modeling task on five topics, a document classification task on six topics, and a duplicate question detection task on four topics. In our setting, DT covers a small subset of topics in DS, which is the 201609012 version dump of Wikipedia. Our tasks are different from GLUElike multi-task learning (Wang et al., 2019), because our focus is on the problems created by the divergence between prominent sense-dominated generic word embeddings and their sense in narrow target topics. 
We do not experiment on the cross-domain sentiment classification task popular in domain adaptation papers since they benefit more from sharing sentiment-bearing words, than learning the correct sense of polysemous words, which is our focus here. All our experiments are on public datasets, and we will publicly release our experiment scripts and code. StackExchange topics We pick four topics (Physics, Gaming, Android and Unix) from the CQADupStack3 dataset of questions and responses. For each topic, the available response text is divided into DT , used for training/adapting embeddings, and f DT , the evaluation fold used to measure perplexity. In each topic, the target corpus DT has 2000 responses totalling roughly 1 MB. We also report results with changing sizes of DT . Depending on the method we use DT , DS, or uuuS to train topic-specific embeddings and evaluate them as-is on two tasks that train task-specific layers on top of these fixed embeddings. The first is an unsupervised language modeling task where we train a LSTM4 on the adapted embed2The target corpora in our experiments came from datasets that were created before this time. 3http://nlp.cis.unimelb.edu.au/ resources/cqadupstack/ 4https://github.com/tensorflow/models/ blob/master/tutorials/rnn/ptb/ptb_word_ lm.py 1722 Method Physics Gaming Android Unix Tgt 121.9 185.0 142.7 159.5 Tgt(unpinned) -0.6 -0.8 0.2 0.1 Table 1: Average reduction in perplexity, when embeddings are not pinned, on four Stackexchange topics. dings (which are pinned) and report perplexity on f DT . The second is a Duplicate question detection task. Available in each topic are human annotated duplicate questions (statistics in Table 10 of Appendix) which we partition across train, test and dev as 50%, 40%, 10%. For contrastive training, we add four times as much randomly chosen non-duplicate pairs. The goal is to predict duplicate/not for a question pair, for which we use word mover distance (Kusner et al., 2015, WMD) over adapted word embeddings. We found WMD more accurate than BiMPM (Wang et al., 2017). We use three splits of the target corpus, and for each resultant embedding, measure AUC on three random (train-)dev-test splits of question pairs, for a total of nine runs. For reporting AUC, WMD does not need the train fold. Medical domain: This domain from the Ohsumed5 dataset has abstracts on cardiovascular diseases. We sample 1.4 MB of abstracts as target corpus DT . We evaluate embeddings on two tasks: (1) unsupervised language modeling on remaining abstracts, and (2) supervised classification on 23 MeSH classes based on title. We randomly select 10,000 titles with train, test, dev split as 50%, 40%, and 10%. Following Joulin et al. (2017), we train a softmax layer on the average of adapted (and pinned) word embeddings. Topics from 20 newsgroup We choose the five top-level classes in the 20 newsgroup dataset6 as topics; viz.: Computer, Recreation, Science, Politics, Religion. The corresponding five downstream tasks are text classification over the 3– 5 fine-grained classes under each top-level class. Train, test, dev splits were 50%, 40%, 10%. We average over nine splits. The body text is used as DT and subject text is used for classification. Pretrained embeddings E are trained on Wikipedia using the default settings of word2vec’s CBOW model. All our data splits are made publicly available at https://github.com/ vihari/we_adapt_datasets. 
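Before turning to the results, the stability-based regularizer of §3.1 (eqs. (2)–(3)) amounts to a short computation, sketched below; the shared vocabulary and the random vectors are stand-ins for embeddings trained on E and on D_T.

```python
import numpy as np

def stability_weights(U_src, U_tgt, vocab_common, K=10, lam=1.0):
    """Eqs. (2)-(3): stability(w) = mean target-space cosine of w with its K
    nearest source-space neighbours (within the shared vocabulary), and
    R(w) = max(0, tanh(lam * stability(w)))."""
    S = U_src / np.linalg.norm(U_src, axis=1, keepdims=True)   # rows -> unit length
    T = U_tgt / np.linalg.norm(U_tgt, axis=1, keepdims=True)
    sims_src = S @ S.T                                         # source-space cosines
    R = np.zeros(len(vocab_common))
    for i in range(len(vocab_common)):
        nbrs = np.argsort(-sims_src[i])[1:K + 1]               # skip w itself
        stability = (T[i] @ T[nbrs].T).mean()                  # eq. (2)
        R[i] = max(0.0, np.tanh(lam * stability))              # eq. (3)
    return R

# Toy shared vocabulary with random vectors standing in for Wikipedia-pretrained
# and target-corpus-trained CBOW embeddings.
rng = np.random.default_rng(0)
vocab = ["x-ray", "universe", "potential", "charge", "field"]
R = stability_weights(rng.normal(size=(5, 50)), rng.normal(size=(5, 50)), vocab, K=2)
print(dict(zip(vocab, np.round(R, 3))))
```

Words with large R(w) are then pulled toward their source vectors by the regularized CBOW loss of eq. (4), while the rest are free to adapt to the target sense.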
5https://www.mat.unical.it/OlexSuite/ Datasets/SampleDataSets-about.htm 6http://qwone.com/˜jason/20Newsgroups/ 4.1 Effect of fine-tuning embeddings on the target task We chose to pin embeddings in all our experiments, once adapted to the target corpus, namely the document classification task on medical and 20 newsgroup topics and language model task on five different topics. This is because we did not see any improvements when we unpin the input embeddings. We summarize in Table 1 the results when the embeddings are not pinned on language model task on the four StackExchange topics. 4.2 Epochs vs. regularization results In Figure 2 we show perplexity and AUC against training epochs. Here we focus on four methods: Tgt, SrcTune, RegFreq, and RegSense. First note that Tgt continues to improve on both perplexity and AUC metrics beyond five epochs (the default in word2vec code7 and left unchanged in RegFreq8 (Yang et al., 2017)). In contrast, SrcTune, RegSense, and RegFreq are much better than Tgt at five epochs, saturating quickly. With respect to perplexity, SrcTune starts getting worse around 20 iterations and becomes identical to Tgt, showing catastrophic forgetting. Regularizers in RegFreq and RegSense are able to reduce such forgetting, with RegSense being more effective than RegFreq. These experiments show that any comparison that chooses a fixed number of training epochs across all methods is likely to be unfair. Henceforth we will use a validation set for the stopping criteria. While this is standard practice for supervised tasks, most word embedding code we downloaded ran for a fixed number of epochs, making comparisons unreliable. We conclude that validation-based stopping is critical for fair evaluation. We next compare SrcTune, RegFreq, and RegSense on the three tasks: perplexity in Table 2, duplicate detection in Table 3, and classification in Table 4. All three methods are better than baselines Src and Concat, which are much worse than Tgt indicating the presence of significant concept drift. Yang et al. (2017) provided no comparison between RegFreq (their method) and SrcTune; we find the latter slightly better. On the supervised tasks, RegFreq is often worse than Tgt provided Tgt is allowed to train for enough epochs. 7https://code.google.com/archive/p/ word2vec/ 8https://github.com/Victor0118/cross_ domain_embedding/ 1723 20 40 60 80 100 105 110 115 120 125 Epochs (Android) LM Perplexity Tgt SrcTune RegFreq RegSense SrcSel:R 20 40 60 80 100 140 145 150 155 160 Epochs (Gaming) 50 100 150 75 80 85 90 Epochs (Physics) %AUC Tgt SrcTune RegFreq RegSense SrcSel:R 50 100 150 70 80 Epochs (Unix) Figure 2: Language model perplexity (top row) and AUC on duplicate question detection (bottom row). Method Physics Gaming Android Unix Med Tgt 121.9 185.0 142.7 159.5 158.9 SrcTune 2.3 6.8 1.1 3.1 5.5 RegFreq 2.1 7.1 1.8 3.4 6.8 RegSense 5.0 13.8 6.7 9.7 14.6 SrcSel 5.8 11.7 5.9 6.4 8.6 SrcSel 6.2 12.5 7.9 9.3 10.5 +RegSense Table 2: Average reduction in language model perplexity over Tgt on five topics. ± standard deviation are shown in Table 11 in the Appendix If the same number of epochs are used to train the two methods, one can reach the misleading conclusion that Tgt is worse. RegSense is better than SrcTune and RegFreq particularly with respect to perplexity, and rare class classification (Table 4). We conclude that a well-designed word stabilitybased regularizer can improve upon epoch-based fine-tuning. 
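The validation-based stopping criterion adopted in this section is easy to operationalize. A generic sketch follows; the helper names, the patience value, and the snapshotting scheme are our assumptions rather than details from the paper.

```python
def train_until_validation_stops(train_one_epoch, dev_score, max_epochs=200, patience=10):
    """Stop when the held-out metric stops improving.  `dev_score` should
    return a value where lower is better (e.g. dev-fold perplexity; use
    -AUC or -accuracy for the supervised tasks)."""
    best, best_epoch, best_state = float("inf"), 0, None
    for epoch in range(1, max_epochs + 1):
        state = train_one_epoch()       # updates embeddings, returns a snapshot
        score = dev_score()
        if score < best:
            best, best_epoch, best_state = score, epoch, state
        elif epoch - best_epoch >= patience:
            break                       # no dev improvement for `patience` epochs
    return best_state, best_epoch
```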
Impact of source initialization Table 5 compares Tgt and RegFreq with two initializers: (1) random as proposed by Yang et al. (2017), and (2) with source embeddings. RegFreq after source initialization is better in almost all cases. SrcSel and RegSense also improve with source initialization, but to a smaller extent. (More detailed numbers are in Table 14 of Appendix.) We conclude that initializing with pretrained embeddings is helpful even with regularizers. Physics Gaming Android Unix Tgt 86.7 82.6 86.8 85.4 Src -2.3±0.5 0.8±0.5 -3.7±0.5 -7.1±0.3 Concat -1.1±0.5 1.4±0.3 -2.1±0.3 -4.5±0.4 AAEME 1.2±0.2 4.6±0.0 -0.3±0.2 0.0±0.2 SrcTune -0.3±0.3 1.9±0.2 0.6±0.2 -0.0±0.2 RegFreq -0.4±0.2 2.4±0.2 -0.5±0.5 -0.5±0.2 RegSense -0.4±0.5 2.2±0.1 -0.5±0.5 -0.5±0.4 SrcSel 3.6±0.2 3.0±0.2 0.8±0.3 2.1±0.2 SrcSel 3.6±0.2 3.1±0.5 0.8±0.3 2.1±0.2 +RegSense Table 3: AUC gains over Tgt (± standard deviation of difference) on duplicate question detection task on various target topics. AAEME is the auto-encoder metaembedding of Bollegala and Bao (2018). Comparison with Meta-embeddings In Tables 3 and 4 we show results with the most recent meta-embedding method AAEME. AAEME provides gains over Tgt in only two out of six cases9. 4.3 Performance of SrcSel We next focus on the performance of SrcSel on all three tasks: perplexity in Table 2, duplicate detection in Table 3, and classification in Table 4. SrcSel is always among the best two methods for perplexity. In supervised tasks, SrcSel is 9On the topic classification datasets in Table 4, AAEME and its variant DAEME were worse than Src. We used the dev set to select the better of Src and their best method. 1724 Ohsumed 20NG Avg Method Micro Macro Rare 5 topics Tgt 26.3 14.7 3.0 88.9 Src -1.0±0.9 0.±0.5 0.±0.1 -3.9±1.2 AAEME -1.0±0.9 0.±0.5 0.±0.1 -3.9±1.2 SrcTune 1.7±1.0 1.8±1.7 1.5±2.0 0.0±1.6 RegFreq 0.6±0.5 1.8±2.3 3.7±4.7 RegSense 1.4±0.5 2.5±1.2 4.0±1.8 0.4±1.3 SrcSel 2.0±0.9 2.6±1.5 1.1±1.4 0.5±1.5 SrcSel 2.3±0.7 3.4±1.3 4.3±1.2 0.5±1.5 +RegSense Table 4: Average accuracy gains over Tgt (± std-dev) on Ohsumed and 20NG datasets. We show macro and rare class accuracy gains for Ohsumed because of its class population skew. Per-topic 20NG gains are in Table 15 in Appendix. Physics Gaming Android Unix RegFreq’s reduction in Perplexity over Tgt Original 1.1±1.1 1.5±1.2 0.9±0.1 0.7±0.8 +SrcInit 2.1±0.9 5.7±0.8 1.1±0.5 2.1±0.8 RegFreq’s gain in AUC over Tgt Original -1.2±0.4 0.1±0.1 -0.2±0.1 -0.4±0.1 +SrcInit -0.4±0.2 2.4±0.2 -0.5±0.5 -0.5±0.2 Table 5: Effect of initializing with source embeddings. We show mean gains over Tgt over 9 runs (± std-dev). the only method that provides significant gains for all topics: AUC for duplicate detection increases by 2.4%, and classification accuracy increases by 1.4% on average. SrcSel+RegSense performs even better than SrcSel on all three tasks particularly on rare words. An ablation study on other variants of SrcSel appear in the Appendix. Word-pair similarity improvements: In Table 6, we show normalized10 cosine similarity of word pairs pertaining to the Physics and Unix topics. Observe how word pairs like (nice, kill), (vim, emacs) in Unix and (current, electron), (lie, group) in Physics are brought closer together as a result of importing the larger unix/physics subset from DS. In each of these pairs, words (e.g. nice, vim, lie, current) have a different prominent sense in the source (Wikipedia). Hence, methods like SrcTune, and RegSense cannot help. 
In contrast, word pairs like (cost, require), (x-ray, x-rays) whose sense is the same in the two corpus benefit significantly from the source across all methods. 10We sample a set S of 20 words based on their frequency. Normalized similarity between a and b is cos(a,b) P w∈(S∪b) cos(a,w). Set S is fixed across methods. Pair Tgt Src Reg Reg Src Tune Freq Sense Sel Unix topic nice, kill 4.6 4.5 4.4 4.4 5.2 vim, emacs 5.7 5.8 5.7 5.8 6.4 print, cat 5.0 4.9 4.9 5.0 5.4 kill, job 5.2 5.1 5.2 5.3 5.8 make, install 5.1 5.1 5.3 5.7 5.8 character, unicode 4.9 5.1 4.7 4.6 5.8 Physics topic lie, group 5.2 5.0 4.4 5.1 5.8 current, electron 5.3 5.3 4.7 5.3 5.7 potential, kinetic 5.8 5.8 4.5 5.9 6.1 rotated, spinning 5.0 5.7 6.0 5.1 5.6 x-ray, x-rays 5.3 7.0 6.1 5.5 6.4 require, cost 4.9 6.2 5.2 5.1 5.3 cool, cooling 5.6 6.0 6.4 5.7 5.7 Table 6: Example word pairs and their normalized similarity across different methods of training embeddings. Running time: SrcSel is five times slower than RegFreq, which is still eminently practical. c DS was within 3× the size of DT in all domains. If DS is available, SrcSel is a practical and significantly more accurate option than adapting pretrained source embeddings. SrcSel+RegSense complements SrcSel on rare words, improves perplexity, and is never worse than SrcSel. Physic Game Andrd Unix Med(Rare) Tgt 89.7 88.4 89.4 89.2 9.4 SrcTune −0.2 0.6 −0.4 −0.2 −2.1 SrcSel 1.9 0.5 0.0 −0.2 1.1 Table 7: Performance with a larger target corpus size of 10MB on the four deduplication tasks (AUC score) and one classification task (Accuracy on rare class). Details in Table 16 of Appendix. Effect of target corpus size The problem of importing source embeddings is motivated only when target data is limited. When we increase target corpus 6-fold, the gains of SrcSel and SrcTune over Tgt was insignificant in most cases. However, infrequent classes continued to benefit from the source as shown in Table 7. 4.4 Contextual embeddings We explore if contextual word embeddings obviate the need for adapting source embeddings, in the ELMo (Peters et al., 2018) setting, a contextualized word representation model, pre-trained on a 5.5B token corpus11. We compare ELMo’s 11https://allennlp.org/elmo 1725 Physic Game Andrd Unix Med Tgt 86.7 82.6 86.8 85.4 26.3 ELMo −1.0 4.5 −1.5 −2.3 3.2 +Tgt −0.8 3.8 0.5 0.0 4.1 +SrcTune −0.5 3.0 0.3 0.2 3.5 +SrcSel 2.6 4.1 1.1 1.5 4.6 Table 8: Gains over Tgt with contextual embeddings on duplicate detection (columns 2–5) and classification (column 6). (Std-dev in Table 17 of Appendix.) contextual embeddings as-is, and also after concatenating them with each of Tgt, SrcTune, and SrcSel embeddings in Table 8. First, ELMo+Tgt is better than Tgt and ELMo individually. This shows that contextual embeddings are useful but they do not eliminate the need for topic-sensitive embeddings. Second, ELMo+SrcSel is better than ELMo+Tgt. Although SrcSel is trained on data that is a strict subset of ELMo, it is still instrumental in giving gains since that subset is aligned better with the target sense of words. We conclude that topic-adapted embeddings can be useful, even with ELMo-style contextual embeddings. Recently, BERT (Devlin et al., 2018) has garnered a lot of interest for beating contemporary contextual embeddings on all the GLUE tasks. We evaluate BERT on question duplicate question detection task on the four StackExchange topics. We use pre-trained BERT-base, a smaller 12-layer transformer network, for our experiments. 
We train a classification layer on the final pooled representation of the sentence pair given by BERT to obtain the binary label of whether they are duplicates. This is unlike the earlier setup where we used EMD on the fixed embeddings. To evaluate the utility of a relevant topic focused corpus, we fine-tune the pre-trained checkpoint either on DT (SrcTune) or on DT ∪c DS (SrcSel:R) using BERT’s masked language model loss. The classifier is then initialized with the fine-tuned checkpoint. Since fine-tuning is sensitive to the number of update steps, we tune the number of training steps using performance on a held-out dev set. F1 scores corresponding to different initializing checkpoints are shown in table 9. It is clear that pre-training the contextual embeddings on relevant target corpus helps in the downstream classification task. However, the gains of SrcSel:R over Tgt is not clear. This could be due to incomplete or noisy sentences in c DS. There is need for more experimentation and research to understand the limited gains of SrcSel:R over SrcTune in the case of Method Physics Gaming Android Unix BERT 87.5 85.3 87.4 82.7 SrcTune 88.0 89.2 88.5 83.5 SrcSel:R 87.9 88.4 88.6 85.1 Table 9: F1 scores on question de-duplication task using BERT-base and when fine-tuned on Tgt only (DT ) and Tgt and selected source (DT ∪c DS) BERT. We leave this for future work. 5 Conclusion We introduced one regularization and one sourceselection method for adapting word embeddings from a partly useful source corpus to a target topic. They work better than recent embedding transfer methods, and give benefits even with contextual embeddings. It may be of interest to extend these techniques to embed knowledge graph elements. Acknowledgment: Partly supported by an IBM AI Horizon grant. We thank all the anonymous reviewers for their constructive feedback. References Ricardo A. Baeza-Yates and Berthier Ribeiro-Neto. 1999. Modern Information Retrieval. AddisonWesley Longman Publishing Co., Inc., Boston, MA, USA. Jeremy Barnes, Roman Klinger, and Sabine Schulte im Walde. 2018. Projecting embeddings for domain adaptation: Joint modeling of sentiment analysis in diverse domains. In COLING. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 conference on empirical methods in natural language processing, pages 120–128. Association for Computational Linguistics. Danushka Bollegala and Cong Bao. 2018. Learning word meta-embeddings by autoencoding. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1650–1661. Danushka Bollegala, Takanori Maehara, and Ken-ichi Kawarabayashi. 2015. Unsupervised cross-domain word representation learning. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL. Danushka Bollegala, David J. Weir, and John A. Carroll. 2014. Learning to predict distributions of words across domains. In ACL. 1726 Daniel Cer et al. 2018. Universal sentence encoder. CoRR, abs/1803.11175. Joshua Coates and Danushka Bollegala. 2018. Frustratingly easy meta-embedding - computing metaembeddings by averaging source word embeddings. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 194–198. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427–431. Association for Computational Linguistics. Prathusha K Sarma, Yingyu Liang, and Bill Sethares. 2018. Domain adapted word embeddings for improved sentiment classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526. Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In International Conference on Machine Learning, pages 957–966. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS Conference, pages 3111–3119. Sewon Min, Minjoon Seo, and Hannaneh Hajishirzi. 2017. Question answering through transfer learning from large fine-grained supervision data. arXiv preprint arXiv:1702.02171. Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2015. Discriminative neural sentence modeling by tree-based convolution. In EMNLP. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In EMNLP Conference, volume 14, pages 1532–1543. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In ACL Conference, volume 1, pages 2227–2237. Sandeep Subramanian, Adam Trischler, Yoshua Bengio, and Christopher J. Pal. 2018. Learning general purpose distributed sentence representations via large scale multi-task learning. CoRR, abs/1804.00079. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. Glue: A multi-task benchmark and analysis platform for natural language understanding. In ICLR. ArXiv 1804.07461. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. arXiv preprint arXiv:1702.03814. Wei Yang, Wei Lu, and Vincent Zheng. 2017. A simple regularization-based algorithm for learning crossdomain word embeddings. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2898–2904. Wenpeng Yin and Hinrich Sch¨utze. 2016. Learning word meta-embeddings. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1351–1360. Ye Zhang, Stephen Roller, and Byron C. Wallace. 2016. Mgnc-cnn: A simple approach to exploiting multiple word embeddings for sentence classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Yftah Ziser and Roi Reichart. 2018. Pivot based language modeling for improved neural domain adaptation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers).
2019
168
SphereRE: Distinguishing Lexical Relations with Hyperspherical Relation Embeddings Chengyu Wang1, Xiaofeng He1∗, Aoying Zhou2 1School of Computer Science and Software Engineering, East China Normal University 2School of Data Science and Engineering, East China Normal University [email protected], [email protected] [email protected] Abstract Lexical relations describe how meanings of terms relate to each other. Typical relations include hypernymy, synonymy, meronymy, etc. Automatic distinction of lexical relations is vital for NLP applications, and is also challenging due to the lack of contextual signals to discriminate between such relations. In this work, we present a neural representation learning model to distinguish lexical relations among term pairs based on Hyperspherical Relation Embeddings (SphereRE). Rather than learning embeddings for individual terms, the model learns representations of relation triples by mapping them to the hyperspherical embedding space, where relation triples of different lexical relations are well separated. We further introduce a Monte-Carlo based sampling and learning algorithm to train the model via transductive learning. Experiments over several benchmarks confirm SphereRE outperforms state-of-the-arts. 1 Introduction Lexical relations are relations between terms in lexicons. Types of lexical relations include hypernymy, synonymy, meronymy, etc. Such relations are treated as key resources for various NLP applications, e.g., question answering (Yang et al., 2017), taxonomy induction (Shen et al., 2018), machine translation (Zhang et al., 2018), natural language inference (Inkpen et al., 2018), lexical database construction (Speer et al., 2017), etc. Due to its importance, automatic acquisition of lexical relations is a research focus in NLP. In early years, lexical relations in WordNet were manually compiled by linguists (Miller, 1995). Recently, path-based and distributional approaches are two major paradigms to classify a term pair into a fixed inventory of lexical relations, or to predict it as random (meaning the ∗Corresponding author. two terms are un-related) (Shwartz and Dagan, 2016; Wang et al., 2017a). Path-based approaches use dependency paths connecting two terms to infer lexical relations (Washio and Kato, 2018a; Roller et al., 2018). The paths usually describe relations between terms explicitly, but require the two terms co-occur in a sentence, leading to the “low coverage” problem. Apart from Hearst patterns (Hearst, 1992), there are few high-quality textual patterns to recognize lexical relations other than hypernymy. Distributional approaches consider the global contexts of terms to predict lexical relations using word embeddings (Baroni et al., 2012; Glavas and Vulic, 2018). They are reported to outperform several path-based approaches, but can suffer from the “lexical memorization” problem (Levy et al., 2015; Shwartz and Dagan, 2016). This is because some supervised distributional approaches learn properties of two terms separately, instead of how two terms relate to each other in the embedding space. (a) Term Embedding Space (b) Relation Embedding Space Car Auto Automobile Vehicle Engine Wheel Hypernymy Synonymy Meronymy (Car, Hypernymy, Vehicle) (Car, Meronymy, Engine) (Car, Meronymy, Wheel) (Car, Synonymy, Auto) (Car, Synonymy, Automobile) O O x x y y Figure 1: An example of hyperspherical learning w.r.t. the term car and three types of lexical relations. 
In this paper, we aim at improving distributional approaches by learning lexical relation representations in hyperspherical embedding space, named hyperSpherical Relation Embeddings (SphereRE). Consider the example w.r.t. car in Figure 1. Word embeddings of these terms are similar to each other due to their contextual similarity. Hence, embedding offsets of term pairs can not distinguish the three types of lexical relations well (i.e., hypernymy, synonymy and meronymy). Instead of learning individual term embeddings, we directly map all the relation triples to the hyperspherical embedding space such that different types of lexical relations have diverse embeddings in terms of angles. For example, the angle between embeddings of (car, hypernymy, vehicle) and (car, synonymy, auto) is large. In contrast, that of (car, synonymy, automobile) and (car, synonymy, auto) is small. As a result, different types of lexical relations can be distinguished. Moreover, by learning representations of lexical relation triples explicitly, our work addresses “lexical memorization” (Levy et al., 2015) from a distributional aspect. To learn SphereRE vectors for lexical relation triples, we minimize embedding distances of term pairs that are likely to share the same lexical relation in both labeled and unlabeled data, and maximize embedding distances of different lexical relations. The distances in the hyperspherical space are defined based on the angles of embeddings. In this work, we first propose a relation-aware semantic projection model to estimate probabilistic distributions of lexical relations over unlabeled data. The SphereRE vectors are efficiently learned by Monte-Carlo techniques by transductive learning. Finally, a neural network based classifier is trained using all the features to make the final predictions of lexical relations over all unlabeled data. We evaluate SphereRE over four benchmark datasets and the CogALex-V shared task (Santus et al., 2016a), and confirm that SphereRE is highly effective, outperforming state-of-the-art. We also evaluate the embedding quality of SphereRE. The rest of this paper is organized as follows. Section 2 summarizes the related work. We present SphereRE in Section 3. Experiments are illustrated in Section 4, with the conclusion shown in Section 5. 2 Related Work We briefly overview related work on lexical relation classification and hyperspherical learning. 2.1 Lexical Relation Classification Among all methods, path-based and distributional approaches are two major paradigms (Shwartz and Dagan, 2016). For hypernymy relations, Hearst patterns (Hearst, 1992) are lexical patterns frequently employed, summarized in Wang et al. (2017a). Shwartz et al. (2016) employ an LSTMbased neural network to learn representations of dependency paths. Roller et al. (2018) use Hearst pattern based statistics derived from a large text corpus to detect hypernymy relations. For other lexical relations, LexNET (Shwartz and Dagan, 2016) extends Shwartz et al. (2016) to classify multiple types of lexical relations based on an integrated neural network. This type of methods requires that the two terms co-occur in a sentence. Washio and Kato (2018a) address the “low coverage” issue by augmenting dependency paths. Distributional approaches employ term representations to predict lexical relations, which exploit global contexts of terms. 
Traditional methods use a combination of two terms’ embeddings as the representation, such as vector concatenation (Baroni et al., 2012; Roller and Erk, 2016), vector difference (Roller et al., 2014; Weeds et al., 2014; Vylomova et al., 2016), etc. After that, a classifier is trained to predict lexical relations. Although distributional methods do not require the co-occurrence of two terms, they suffer from “lexical memorization” (Levy et al., 2015). It means the algorithms only learn the properties of the two terms, rather than the relations between them. Recently, more complicated neural networks have been proposed. Glavas and Vulic (2018) propose a Specialization Tensor Model to discriminate between four lexical relations. The model learns different specializations of input distributional embeddings w.r.t. term pairs in order to predict different types of lexical relations. Attia et al. (2016) employ a convolutional neural network in a multitask setting. Nguyen et al. (2016, 2017b) distinguish antonymy and synonymy via word embeddings and path-based neural networks. Similar research is presented in Hashimoto et al. (2015); Washio and Kato (2018b); Chen et al. (2018); Bouraoui et al. (2018). A few works learn relation embeddings for other NLP applications (Jameel et al., 2018; Joshi et al., 2018). Another research direction is to learn specializing embeddings. Yu et al. (2015); Luu et al. (2016); Nguyen et al. (2017a); Vulic and Mrksic (2018) (and a few others) learn hypernymy embeddings considering hierarchical structure of hypernymy relations. For other lexical relations, Mrksic et al. (2017) present the model Attract-Repel to improve qualities of word embeddings for synonymy recognition. However, they focus on one particular lexical relation, not capable of distinguishing multiple types of lexical relations. 2.2 Hyperspherical Learning The work of hyperspherical learning is mostly in computer vision. Liu et al. (2017) propose a hperspherical network (SphereNet) for image classification. It learns angular representations on hyperspheres using hyperspherical convolution units. Wang et al. (2017c) apply the L2 hypersphere embedding technique to face verification, optimizing cosine similarity for feature normalization. In NLP, hyperspherical learning has not been extensively used. Masumura et al. (2017) introduce hyperspherical query likelihood models for information retrieval. Mei and Wang (2016) leverage hyperspherical clustering for document categorization. Lv et al. (2018) consider sphere representations as knowledge graph embeddings. To our knowledge, few methods employ hyperspherical learning to learn representations for NLP applications. In our work, we focus on lexical relation classification and present the SphereRE model to address this problem. 3 The SphereRE Model We introduce the Hyperspherical Relation Embedding (SphereRE) model in detail. 3.1 Learning Objective We start with some basic notations. Let D and U be the labeled and unlabeled sets, consisting of term pairs (xi, yi). Each pair (xi, yi) corresponds to a pre-defined lexical relation type ri ∈R.1 The task of our work is to predict the lexical relation type ri for each pair (xi, yi) ∈U based on D. Denote ⃗xi (or ⃗yi) as the embedding of word xi (or yi), pre-trained using any neural language models. For each lexical relation type rm ∈R, we learn a mapping function fm(⃗xi) that maps the relation subject xi to the relation object yi in the embedding space if xi and yi have the lexical relation type rm. 
Hence, we aim at minimizing the objective function Jf with I(·) as the indicator function: Jf = |D| X i=1 X rm∈R I(ri = rm)∥fm(⃗xi) −⃗yi∥2 1If the dataset contains term pairs of several lexical relation types, together with random, unrelated term pairs, we consider “random” as a special lexical relation type. To represent lexical relation triples in the (original) embedding space, we utilize the vector difference model (Roller et al., 2014; Weeds et al., 2014; Vylomova et al., 2016). Combining the model Jf, given a term pair (xi, yi) with lexical relation type ri, the representation of the triple is fi(⃗xi) −⃗xi. Next, we consider the hyperspherical learning objective. Based on the assumption in Figure 1, we define a symmetric function g(·, ·) to quantify the distance between two representations of lexical relation triples in the SphereRE space. Following Roller et al. (2014); Weeds et al. (2014); Vylomova et al. (2016), we employ the vector difference model to represent the relation embedding of a term pair. Because we aim at learning representations for both labeled and unlabeled data in order to make predictions, for two pairs (xi, yi) and (xj, yj) with lexical relation types ri and rj ((xi, yi), (xj, yj) ∈D ∪U), we minimize the following function: δ(ri, rj)g(fi(⃗xi) −⃗xi, fj(⃗xj) −⃗xj) where δ(ri, rj) is the sign function that returns 1 if the two pairs share the same lexical relation type (i.e., ri = rj) and -1 otherwise. Hence, embedding distances of term pairs that share the same lexical relation type are minimized. Embedding distances of term pairs with different lexical relation types are maximized. Refer to Figure 2 for a geometric interpretation of the objective. Car Auto Automobile Car Vehicle Wheel v(car, auto) v(car, automobile) Goal: Minimizing the angle Goal: Maximizing the angle v(car, wheel) v(car, vehicle) Case i) Same Lexical Relation Type Case ii) Different Lexical Relation Types Figure 2: A geometric interpretation of hyperspherical learning. “v(car, auto)” is the embedding vector w.r.t. the term pair “(auto, car)” (i.e., fi(⃗xi) −⃗xi) based on vector difference and Jf, characterizing the lexical relation of the two terms in the original embedding space. For simplicity, we use an arrow to represent the embedding fi(⃗xi) −⃗xi. In summary, the objective function of lexical relation representation learning in the hyperspherical embedding space Jg is defined as follows: Jg = D∪U X i,j δ(ri, rj)g(fi(⃗xi) −⃗xi, fj(⃗xj) −⃗xj) Let Θ be all parameters in the model. The general objective of SphereRE is defined as follows, with λ1 and λ2 as balancing hyperparameters: J(Θ) = Jf + λ1Jg + λ2∥Θ∥2 It is computationally intractable to minimize J(Θ). The reasons are twofold: i) The lexical relation types ri of all pairs (xi, yi) ∈U should be predicted before we can minimize J(Θ). ii) The definition of Jg does not directly determine how to generate the representations of lexical relation triples. Additionally, minimizing J(Θ) requires the traversal of D and U in quadratic time, leading to the high computational cost. In the following, we present a relation-aware semantic projection model as the function fm(·). It is employed to approximate ri (for all (xi, yi) ∈ U). Next, the representation learning process of lexical relation triples and the lexical relation classification algorithms are introduced in detail. 3.2 Relation-aware Semantic Projection For each pair (xi, yi) ∈U, we approximate ri from a probabilistic perspective, as an initial prediction step. 
Following Wang and He (2016); Yamane et al. (2016); Wang et al. (2017b), for each lexical relation type rm ∈R, we utilize a mapping matrix Mm ∈Rd×d as fm(⃗xi) where d is the dimension of pre-trained word embeddings. After adding a Tikhonov regularizer on Mm, the learning objective function Jm w.r.t. one specific lexical relation type rm ∈R over D can be re-written as follows: Jm = |D| X i=1 I(ri = rm)∥Mm⃗xi −⃗yi∥2 + µ∥Mm∥2 F Therefore, Jf = P rm∈R Jm. The minimization of Jm has a closed-form solution. The optimal solution M∗ m is as follows: M∗ m = arg min Mm Jm = (XT mXm + µE)−1XT mYm (1) where Xm and Ym are two nm × d data matrices, with nm being the number of term pairs that have the lexical relation type rm ∈R in D. The i-th rows of Xm and Ym are the embedding vectors of the i-th sample (xi, yi) ∈D that has the lexical relation type rm ∈R. E is a d×d identity matrix. For each lexical relation type rm ∈R, we train a semantic projection model based on Eq. (1). After that, a simple lexical relation prediction classifier is trained over D based on the following |R| × d-dimensional feature vector F(xi, yi):2 F(xi, yi) = (M1⃗xi −⃗yi) ⊕· · · ⊕(M|R|⃗xi −⃗yi) where ⊕is the vector concatenation operator. M1, · · · , M|R| are projection matrices w.r.t. |R| lexical relation types r1, · · · , r|R|. Based on Jm, if (xi, yi) has the lexical relation type rm, the norm of Mm⃗xi −⃗yi is likely to be small. On the contrary, the norms of Mn⃗xi − ⃗yi(1 ≤n ≤|R|, n ̸= m) are likely to be large. Therefore, the features are highly discriminative for lexical relation classification. For each pair (xi, yi) ∈U, the classifier outputs an |R|-dimensional probabilistic distribution over all lexical relation types R. In this work, we denote pi,m as the probability of (xi, yi) ∈U having the lexical relation type rm ∈R. 3.3 Relation Representation Learning After we have computed the probability pi,m for all (xi, yi) ∈U and all rm ∈R, we focus on the objective Jg. The goal is to learn a dr-dimensional vector ⃗ri for each (xi, yi) ∈D ∪U, regarded the representation of the lexical relation triple (named the SphereRE vector). To avoid the high complexity and the propagation effect of predicted errors, inspired by Perozzi et al. (2014); Grover and Leskovec (2016), we reformulate Jg and the function g(·, ·) via the Skipgram model (Mikolov et al., 2013a) over neighboring graphs. Let Nb(xi, yi) be the neighbors of a term pair (xi, yi) in the SphereRE space, where each term pair (xj, yj) ∈Nb(xi, yi) is likely to share the same lexical relation type as (xi, yi). To ensure that term pairs with the same lexical relation type have similar SphereRE vectors, the problem of optimizing Jg can be reformulated by maximizing the probability of predicting the neighbors of (xi, yi) given its SphereRE vector ⃗ri. Therefore, we define a new objective function J ′ g to re2In practice, we employ the multiclass logistic regression model as the underlying classifier. This is because it generates well calibrated probabilistic distributions, reflecting the model prediction confidence. In contrast, the outputs of more complicated models such as deep neural networks are not well calibrated. See Guo et al. (2017) for details. 
Condition Value of wi,j (xi, yi) ∈D, (xj, yj) ∈D, ri = rj 1 (xi, yi) ∈D, (xj, yj) ∈D, ri ̸= rj 0 (xi, yi) ∈D, (xj, yj) ∈U, ri = rm 1 2pj,m(cos(Mm⃗xi −⃗xi, Mm⃗xj −⃗xj) + 1) (xi, yi) ∈U, (xj, yj) ∈D, rj = rm 1 2pi,m(cos(Mm⃗xi −⃗xi, Mm⃗xj −⃗xj) + 1) (xi, yi) ∈U, (xj, yj) ∈U 1 2 P rm∈R pi,mpj,m · (cos(Mm⃗xi −⃗xi, Mm⃗xj −⃗xj) + 1) Table 1: The choice of wi,j according to different conditions. place Jg based on the negative log likelihood: J ′ g = − X (xi,yi)∈D∪U X (xj,yj)∈Nb(xi,yi) log Pr((xj, yj)|⃗ri) (2) A remaining problem is to define the neighborhood Nb(xi, yi) properly, to preserve the hyperspherical similarity property of the distance function g(fi(⃗xi) −⃗xi, fj(⃗xj) −⃗xj). In this work, we introduce a weight factor wi,j ∈[0, 1] w.r.t. two pairs (xi, yi) and (xj, yj) in D ∪U that quantifies the similarity between the two pairs in the SphereRE space. If (xi, yi) ∈D and (xj, yj) ∈ D, because the true lexical relation types are known, we simply have: wi,j = I(ri = rj). We continue to discuss other conditions. If i) (xi, yi) ∈D has the lexical relation type rm, and ii) the lexical relation type of (xj, yj) ∈U is unknown but is predicted to be rm with probability pj,m, the similarity between (xi, yi) and (xj, yj) in terms of angles is defined using the weighted cosine similarity function in the range of (0, 1): wi,j = 1 2pj,m(cos(Mm⃗xi −⃗xi, Mm⃗xj −⃗xj) + 1) A similar case holds for (xi, yi) ∈U and (xj, yj) ∈D. If (xi, yi) ∈U and (xj, yj) ∈U, because the lexical relation types of both pairs are unknown, we compute the weight wi,j by summing up all the weighted cosine similarities over all possible lexical relation types in R: wi,j =1 2 X rm∈R pi,mpj,m· (cos(Mm⃗xi −⃗xi, Mm⃗xj −⃗xj) + 1) Readers can also refer to Table 1 for a summarization of the choices of wi,j. To reduce computational complexity, we propose a Monte-Carlo based sampling and learning method to learn SphereRE vectors based on the values of wi,j. The algorithm is illustrated in Algorithm 1. It starts with the random initialization of SphereRE vector ⃗ri for each (xi, yi) ∈D ∪U. An iterative process randomly selects one pair (xi, yi) as the starting point. The next pair (xj, yj) is selected with probability as follows: Pr((xj, yj)|(xi, yi)) = wi,j P (x′ j,y′ j)∈Dmini wi,j′ (3) where Dmini is a mini-batch of term pairs randomly selected from D ∪U. In this way, the algorithm only needs to traverse |Dmini| pairs instead of |D| + |U| pairs. This process continues, resulting in a sequence of pairs, denoted as S: S = {(x1, y1), (x2, y2), · · · , (x|S|, y|S|)}. Denote l as the window size. We approximate J ′ g in Eq. (2) by −P (xi,yi)∈S Pi+l j=i−l(j̸=i) log Pr((xj, yj)|⃗ri) using the negative sampling training technique of the Skip-gram model (Mikolov et al., 2013a,b). The values of SphereRE vectors ⃗ri are continuously updated until all the iterations stop. We can see that ⃗ris are the low-dimensional representations of lexical relation triples, encoded in the hyperspherical space. The process is shown in Algorithm 1. Algorithm 1 SphereRE Learning 1: for each (xi, yi) ∈D ∪U do 2: Randomly initialize SphereRE vector ⃗ri; 3: end for 4: for i = 1 to max iteration do 5: Sample a sequence based on Eq. (3): S = {(x1, y1), (x2, y2), · · · , (x|S|, y|S|)}; 6: Update all SphereRE vectors ⃗ri by minimizing −P (xi,yi)∈S Pi+l j=i−l(j̸=i) log Pr((xj, yj)|⃗ri); 7: end for In practice, we find that there is a drawback of the sampling process. 
Because the predictions for all (xi, yi) ∈U are probabilistic, it leads to the situation where the algorithm prefers to choose term pairs in D to form the sequence S. The low sampling rate of U results in the poor representation learning quality of these pairs. Here, we employ a boosting approach to increase chances of (xi, yi) ∈U being selected based on stratified sampling. The values of all probabilities pi,m are multiplied by a factor γ > 1, i.e., pi,m ←pi,mγ. 3 3.4 Lexical Relation Classification Finally, we train a lexical relation classifier. For each pair (xi, yi) ∈D, we train a classifier over (|R| × d + dr)-dimensional feature set F∗(xi, yi): F∗(xi, yi) = F(xi, yi) ⊕⃗ri where F(xi, yi) are |R| × d-dimensional projection-based features. ⃗ri is the SphereRE vector of (xi, yi) that encodes the relation triple in the SphereRE space. We follow the work (Shwartz and Dagan, 2016) by using a fully-connected feed-forward neural network, shown in Figure 3. The input layer has |R| × d + dr nodes. We add only one hidden layer, followed by an |R|-dimensional output layer with softmax as the prediction function. The neural network is trained using the stochastic gradient descent algorithm, and is employed to predict the lexical relations for all (xi, yi) ∈U. The highlevel procedure is summarized in Algorithm 2. … … … …… … … SphereRE Vector Projection-based Features w.r.t. |R| Lexical Relation Types Hidden Layer Output Layer Figure 3: The neural network architecture. Algorithm 2 Lexical Relation Classification 1: for each lexical relation type rm ∈R do 2: Compute M ∗ m by Eq. (1); 3: end for 4: Train a classifier over D over F(xi, yi); 5: for each pair (xi, yi) ∈U do 6: Predict distribution pi,m by the classifier; 7: end for 8: Learning ⃗ri for all (xi, yi) ∈D ∪U by Algorithm 1; 9: Train a neural network over D by features F ∗(xi, yi); 10: for each pair (xi, yi) ∈U do 11: Predict the lexical relation ri by the neural network; 12: end for 3Note that although we do not explicitly optimize Jg or construct the SphereRE space directly, the SphereRE vectors learned by Algorithm 1 (i.e., ⃗ri) reflect the clear distinctions of triples with different lexical relation types. Further analysis of SphereRE vectors will be shown in experiments. 4 Experiments In this section, we conduct extensive experiments to evaluate SphereRE and compare it with stateof-the-art to make the convincing conclusion. 4.1 Datasets and Experimental Settings In the experiments, we train a fastText model (Bojanowski et al., 2017) over the English Wikipedia corpus to generate term embeddings. The dimensionality d is set to 300. To evaluate the effectiveness of SphereRE, we use four public datasets for multi-way classification of lexical relations: K&H+N (Necsulescu et al., 2015), BLESS (Baroni and Lenci, 2011), ROOT09 (Santus et al., 2016b) and EVALution (Santus et al., 2015). We also evaluate SphereRE over the subtask 2 of the CogALex-V shared task (Santus et al., 2016a). The statistics are summarized in Table 2. We follow the exact same experimental settings to partition the four public datasets into training, validation and testing sets as in (Shwartz and Dagan, 2016). The partition of the CogALex dataset is the same as those in the default settings of the CogALex-V shared task (Santus et al., 2016a). The default settings for SphereRE are as follows: µ = 0.001, dr = 300, |Dmini| = 20, |S| = 100, γ = 2 and l = 3. We run Algorithm 1 in 500 iterations. 
We also report how the changes of the neural network architecture and parameters affect the performance over the validation sets afterwards. It should be further noted that we do not set the values of λ1 and λ2 in the implementation because we employ sampling based techniques to learn ⃗ri, instead of directly optimizing J(Θ). 4.2 Experiments over Four Public Datasets We report the results of SphereRE and compare it with state-of-the-art over four public datasets. 4.2.1 General Performance To compare SphereRE with others, we consider following baselines: • Concat (Baroni et al., 2012), Diff (Weeds et al., 2014): They are classical distributional methods using vector concatenation and vector difference as features. A neural network without hidden layers is trained. • NPB (Shwartz et al., 2016): It uses a pathbased LSTM neural network to classify lexical relations. It is implemented by Shwartz Relation K&H+N BLESS ROOT09 EVALution CogALex Antonym 1,600 601 Attribute 2,731 1,297 Co-hyponym 25,796 3,565 3,200 Event 3,824 Holonym 544 Hypernym 4,292 1,337 3,190 1,880 637 Meronym 1,043 2,943 654 387 Random 26,378 12,146 6,372 5,287 Substance meronym 317 Synonym 1,086 402 All 57,509 26,546 12,762 7,378 7,314 Table 2: Statistics of all datasets. Relation names in all datasets have been mapped to relation names in WordNet. and Dagan (2016) and only considers dependency paths. • LexNET (Shwartz and Dagan, 2016): It is built upon Shwartz et al. (2016), which combines representations of dependency paths and word embeddings for classification. • Concath, Diffh, LexNETh: They are variants of Concat, Diff and LexNET, with one hidden layer between the input and the output layer. • NPB+Aug, LexNET+Aug (Washio and Kato, 2018a): They are variants of NPB and LexNET. The dependency paths used in the two original systems have been augmented in order to improve the pattern coverage. The results of SphereRE and all the baselines are summarized in Table 3. We compute the Precision, Recall and F1 score for each lexical relation, and report the average scores over all the relations, weighted by the support. We can see that classification distributional approaches perform worse than integrated neural networks (such as Shwartz and Dagan (2016)), because they are not capable of learning the true relations between terms. The proposed approach SphereRE consistently outperforms all the baselines over the four datasets in terms of F1 scores. When the type of lexical relations becomes larger (e.g., EVALution), the improvement of SphereRE are less significant than that of other datasets (e.g., BLESS, ROOT09). The most possible cause is that errors induced by relation-aware semantic projection are more likely to propagate to subsequent steps. 4.2.2 Study on Neural Network Architectures We adjust the neural network architecture (shown in Figure 3) and report the performance over the validation sets in Figure 4. As shown, adding more hidden layers does not improve the performance of lexical relation classification. In some datasets (e.g., EVALution), the performance even drops, indicating a sign of overfitting. We change the number of hidden nodes when we use one hidden layer in the network. The results show that the setting does not affect the performance greatly. 
0 1 2 3 4 5 0.5 0.6 0.7 0.8 0.9 1.0 Number of hidden layers F1 Score K&H+N BLESS ROOT09 EVALution (a) Varying #hidden layers 100 200 300 400 500 0.6 0.7 0.8 0.9 1.0 Number of nodes in the hidden layer F1 Score K&H+N BLESS ROOT09 EVALution (b) Varying #nodes Figure 4: Network structure analysis. 4.2.3 Study on Monte-Carlo Sampling We continue to study how the settings of MonteCarlo sampling affect the quality of the SphereRE vectors. We adjust the number of iterations and the parameter γ. The performance is shown in Figure 5. As seen, more iterations contribute to the higher quality of embeddings. After a sufficient number of iterations (> 500), the performance becomes stable. As for the choice of γ, smaller values lead to the low sampling rates of unlabeled data, hence lower the prediction performance. In contrast, an overly large γ induces too many errors in relation-aware semantic projection to the sampling process. Hence, a balanced setting of γ is required. 200 400 600 800 1000 0.6 0.7 0.8 0.9 1.0 Number of iterations F1 Score K&H+N BLESS ROOT09 EVALution (a) Varying #iterations 1 2 3 4 5 0.5 0.6 0.7 0.8 0.9 1.0 γ F1 Score K&H+N BLESS ROOT09 EVALution (b) Varying γ Figure 5: MC sampling analysis. Method↓Dataset→ K&H+N BLESS ROOT09 EVALution Pre Rec F1 Pre Rec F1 Pre Rec F1 Pre Rec F1 Concat 0.909 0.906 0.904 0.811 0.812 0.811 0.636 0.675 0.646 0.531 0.544 0.525 Concath 0.983 0.984 0.983 0.891 0.889 0.889 0.712 0.721 0.716 0.57 0.573 0.571 Diff 0.888 0.886 0.885 0.801 0.803 0.802 0.627 0.655 0.638 0.521 0.531 0.528 Diffh 0.941 0.942 0.941 0.861 0.859 0.860 0.683 0.692 0.686 0.536 0.54 0.539 NPB 0.713 0.604 0.55 0.759 0.756 0.755 0.788 0.789 0.788 0.53 0.537 0.503 LexNET 0.985 0.986 0.985 0.894 0.893 0.893 0.813 0.814 0.813 0.601 0.607 0.6 LexNETh 0.984 0.985 0.984 0.895 0.892 0.893 0.812 0.816 0.814 0.589 0.587 0.583 NPB+Aug 0.897 0.842 0.778 0.489 LexNET+Aug 0.970 0.927 0.806 0.545 SphereRE 0.990 0.989 0.990 0.938 0.938 0.938 0.860 0.862 0.861 0.62 0.621 0.62 Improvement 0.5%↑ 1.1%↑ 4.7%↑ 2.0%↑ Table 3: Performance comparison of lexical relation classification over four public datasets. Features↓Dataset→ K&H+N BLESS ROOT09 EVALution w/o. SphereRE vectors 0.968 0.918 0.82 0.581 w. SphereRE vectors 0.990 0.938 0.861 0.62 Improvement +2.2% +2.0% +4.1% +3.9% Table 4: Feature analysis in terms of F1 score. Method↓Relation→ SYN ANT HYP MER All Attia et al. (2016) 0.204 0.448 0.491 0.497 0.423 Shwartz and Dagan (2016) 0.297 0.425 0.526 0.493 0.445 Glavas and Vulic (2018) 0.221 0.504 0.498 0.504 0.453 SphereRE 0.286 0.479 0.538 0.539 0.471 Table 5: Performance comparison over the CogALexV shared task. (Due to space limitation, we only list the performance of top systems in CogALex-V.) 4.2.4 Feature Analysis We further study whether adding the SphereRE vectors contributes to lexical relation classification. We remove all the these embeddings and use the rest of the features to make prediction based on the same neural architecture and parameter settings. The results are shown in Table 4. By learning the SphereRE vectors and adding them to the classifier, the performance improves in all four datasets. 4.3 Experiments over the CogALex-V Shared Task We evaluate SphereRE over the CogALex-V shared task (Santus et al., 2016a), where participants are asked to classify 4,260 term pairs into 5 lexical relations: synonymy, antonymy, hypernymy, meronymy and random. The training set contains 3,054 pairs. 
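To show where the quantities studied in this section enter (the number of iterations, the boosting factor γ, the mini-batch |Dmini|, the walk length |S| and the window l), here is a DeepWalk-style sketch of the sampling-and-learning loop of Algorithm 1. gensim's Word2Vec stands in for the Skip-gram update with negative sampling, the helper names are ours, and the parameter values in the comments mirror the defaults reported above.

```python
import random
import numpy as np
from gensim.models import Word2Vec

def sample_walk(pair_ids, weight_fn, walk_len=100, mini_batch=20):
    """One sequence S of Algorithm 1.  weight_fn(i, j) returns w_{i,j}
    (Table 1), with probabilities of unlabeled pairs already boosted by
    gamma; the next pair is drawn from a random mini-batch with probability
    proportional to w_{i,j}, as in Eq. (3)."""
    walk = [random.choice(pair_ids)]
    while len(walk) < walk_len:
        cand = random.sample(pair_ids, k=min(mini_batch, len(pair_ids)))
        w = np.array([weight_fn(walk[-1], j) for j in cand], dtype=float)
        nxt = random.choice(cand) if w.sum() == 0 else cand[np.random.choice(len(cand), p=w / w.sum())]
        walk.append(nxt)
    return [str(p) for p in walk]            # gensim expects string tokens

# walks = [sample_walk(all_pair_ids, w_fn) for _ in range(500)]   # max_iteration
# Skip-gram with negative sampling over the walks gives the SphereRE vectors:
# w2v = Word2Vec(walks, vector_size=300, window=3, sg=1, negative=5, min_count=1)
# sphere_vec = w2v.wv[str(pair_id)]
```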
This task is the most challenging because i) it considers random relations as noise, discarding it from the averaged F1 score; ii) the training set is small; and iii) it enforces lexical spilt of the training and testing sets, disabling “lexical memorization” (Levy et al., 2015). In this shared task, GHHH (Attia et al., 2016) and LexNET (Shwartz and Dagan, 2016) are toptwo systems with the highest performance. The most recent work on CogALex-V is STM (Glavas and Vulic, 2018). SphereRE achieves the averaged F1 score of 47.1% (excluding the random relations), outperforming state-of-the-art. Additionally, as reported in previous studies, the “lexical memorization” effect (Levy et al., 2015) is rather severe for hypernymy relations. Although SphereRE is fully distributional, it achieves the highest F1 score of 53.8%. 4.4 Analysis of SphereRE Vector Qualities We conduct additional experiments to evaluate the qualities of Sphere vectors. The first set of experiments evaluates whether top-k most similar relation triples of a given relation triple share the same lexical relation type. This task is called topk similar lexical relation retrieval. In this task, the similarity between two relation triples is quantified by the cosine similarity of the two corresponding SphereRE vectors. The score is reported by Precision@k. Higher Precision@k scores indicate SphereRE vectors with better quality, because lexical relation triples with the same lexical relation type should have similar Sphere vectors. In the experiments, we compute the Precision@k over all the labeled (training) and unlabeled (testing) sets of all five datasets. The results are shown in Table 6 in terms of Average Precision@k (AP@k) (with k = 1, 5, 10). As seen, SphereRE has near perfect performance (over 95% for AP@1, over 90% for AP@5 and AP@10) over training sets of all five datasets. This is because in representation learning, all the labels (i.e., lexical relation types) of these term pairs are already known. Hence, SphereRE preserves distributional characteristics of these labeled datasets well. As for unlabeled datasets, the performance drops slightly over K&H+N, BLESS and ROOT09. The performance is not very satis(a) ROOT09 (Training) (b) ROOT09 (Testing) (c) EVALution (Training) (d) EVALution (Testing) Figure 6: Visualization of SphereRE vectors by t-SNE (Maaten and Hinton, 2008). Dataset AP@1 AP@5 AP@10 AP@1 AP@5 AP@10 Training Set Testing Set K&H+N 0.972 0.954 0.951 0.862 0.844 0.839 BLESS 0.962 0.950 0.948 0.868 0.830 0.825 ROOT09 0.987 0.993 0.989 0.814 0.789 0.828 EVALution 0.988 0.987 0.982 0.653 0.650 0.697 CogALex 0.953 0.904 0.918 0.631 0.628 0.649 Table 6: Performance of top-k similar relation retrieval over five datasets in terms of Average Precision@k. Term Pairs Predicted Relation True Relation (heart, courage) Random Synonym (wing, animal) Random Meronym (mint, pennyroyal) Random Hypernym (handlebar, bike) Co-hyponym Meronym (grenade, object) Attribute Hypernym Table 7: Cases of prediction errors. All the relation names are mapped to relation names in WordNet. factory over EVALution and CogALex, due to the internal challenges of lexical relation classification over the two datasets. This is because they contain a relatively large number of lexical relation types and random, unrelated term pairs. To have a more intuitive understanding of these learned SphereRE vectors, we plot the embeddings in Figure 6 by t-SNE (Maaten and Hinton, 2008). 
Due to space limitation, we only plot SphereRE vectors in part of the training and testing sets from ROOT09 and EVALution. For training data, we can see a clear separation of different lexical relation types. The slight “messiness” w.r.t. testing data indicates learning errors. 4.5 Error Analysis For error analysis, we randomly sample 300 cases of prediction errors and ask human annotators to analyze the most frequent causes. We present several cases in Table 7. The largest number of errors (approximately 42%) occur due to the random relations in K&H+N, BLESS, ROOT09 and CogALex. These relations are large in quantity and blurry in semantics, misleading the classifier to predict other lexical relations as random. Another large proportion of errors (about 31%) are related to unbalanced ratio of relations (apart from random). The number of some types of lexical relation triples in the training set is small (e.g., Meronym in EVALution, Synonym in CogALex). As a result, the representation learning w.r.t. these relation triples is relatively of lower quality. 5 Conclusion and Future Work In this paper, we present a representation learning model to distinguish lexical relations based on Hyperspherical Relation Embeddings (SphereRE). It learns representations of lexical relation triples by mapping them to the hyperspherical embedding space. The lexical relations between term pairs are predicted using neural networks over the learned embeddings. Experiments over four benchmark datasets and CogALex-V show SphereRE outperforms state-of-the-art methods. In the future, we will improve our model to deal with datasets containing a relatively large number of lexical relation types and random term pairs. Additionally, the mapping technique used for relation-aware semantic projection can be further improved to model different linguistic properties of lexical relations (e.g., the “one-to-many” mappings for meronymy). Acknowledgements This work is supported by the National Key Research and Development Program of China under Grant No. 2016YFB1000904. References Mohammed Attia, Suraj Maharjan, Younes Samih, Laura Kallmeyer, and Thamar Solorio. 2016. Cogalex-v shared task: GHHH - detecting semantic relations via word embeddings. In CogALex@COLING, pages 86–91. Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In EACL, pages 23–32. Marco Baroni and Alessandro Lenci. 2011. How we blessed distributional semantic evaluation. In GEMS, pages 1–10. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL, 5:135–146. Zied Bouraoui, Shoaib Jameel, and Steven Schockaert. 2018. Relation induction in word embeddings revisited. In COLING, pages 1627–1637. Hong-You Chen, Cheng-Syuan Lee, Keng-Te Liao, and Shou-de Lin. 2018. Word relation autoencoder for unseen hypernym extraction using word embeddings. In EMNLP, pages 4834–4839. Goran Glavas and Ivan Vulic. 2018. Discriminating between lexico-semantic relations with the specialization tensor model. In NAACL. Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In KDD, pages 855–864. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In ICML, pages 1321–1330. Kazuma Hashimoto, Pontus Stenetorp, Makoto Miwa, and Yoshimasa Tsuruoka. 2015. Task-oriented learning of word embeddings for semantic relation classification. 
In CoNLL, pages 268–278. Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In COLING. Diana Inkpen, Xiaodan Zhu, Zhen-Hua Ling, Qian Chen, and Si Wei. 2018. Neural natural language inference models enhanced with external knowledge. In ACL, pages 2406–2417. Shoaib Jameel, Zied Bouraoui, and Steven Schockaert. 2018. Unsupervised learning of distributional relation vectors. In ACL, pages 23–33. Mandar Joshi, Eunsol Choi, Omer Levy, Daniel S. Weld, and Luke Zettlemoyer. 2018. pair2vec: Compositional word-pair embeddings for cross-sentence inference. CoRR, abs/1810.08854. Omer Levy, Steffen Remus, Chris Biemann, and Ido Dagan. 2015. Do supervised distributional methods really learn lexical inference relations? In NAACL, pages 970–976. Weiyang Liu, Yan-Ming Zhang, Xingguo Li, Zhen Liu, Bo Dai, Tuo Zhao, and Le Song. 2017. Deep hyperspherical learning. In NIPS, pages 3953–3963. Anh Tuan Luu, Yi Tay, Siu Cheung Hui, and See-Kiong Ng. 2016. Learning term embeddings for taxonomic relation identification using dynamic weighting neural network. In EMNLP, pages 403–413. Xin Lv, Lei Hou, Juanzi Li, and Zhiyuan Liu. 2018. Differentiating concepts and instances for knowledge graph embedding. In EMNLP, pages 1971– 1979. Laurens Van Der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. JMLR, 9(2605):2579– 2605. Ryo Masumura, Taichi Asami, Hirokazu Masataki, Kugatsu Sadamitsu, Kyosuke Nishida, and Ryuichiro Higashinaka. 2017. Hyperspherical query likelihood models with word embeddings. In IJCNLP, pages 210–216. Jian-Ping Mei and Yangtao Wang. 2016. Hyperspherical fuzzy clustering for online document categorization. In FUZZ-IEEE, pages 1487–1493. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119. George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39–41. Nikola Mrksic, Ivan Vulic, Diarmuid ´O S´eaghdha, Ira Leviant, Roi Reichart, Milica Gasic, Anna Korhonen, and Steve J. Young. 2017. Semantic specialization of distributional word vector spaces using monolingual and cross-lingual constraints. TACL, 5:309–324. Silvia Necsulescu, Sara Mendes, David Jurgens, N´uria Bel, and Roberto Navigli. 2015. Reading between the lines: Overcoming data sparsity for accurate classification of lexical relationships. In *SEM. Kim Anh Nguyen, Maximilian K¨oper, Sabine Schulte im Walde, and Ngoc Thang Vu. 2017a. Hierarchical embeddings for hypernymy detection and directionality. In EMNLP, pages 233–243. Kim Anh Nguyen, Sabine Schulte im Walde, and Ngoc Thang Vu. 2016. Integrating distributional lexical contrast into word embeddings for antonymsynonym distinction. In ACL. Kim Anh Nguyen, Sabine Schulte im Walde, and Ngoc Thang Vu. 2017b. Distinguishing antonyms and synonyms in a pattern-based neural network. In EAL, pages 76–85. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: online learning of social representations. In KDD, pages 701–710. Stephen Roller and Katrin Erk. 2016. Relations such as hypernymy: Identifying and exploiting hearst patterns in distributional vectors for lexical entailment. In EMNLP, pages 2163–2172. Stephen Roller, Katrin Erk, and Gemma Boleda. 2014. Inclusive yet selective: Supervised distributional hypernymy detection. 
In COLING, pages 1025–1036. Stephen Roller, Douwe Kiela, and Maximilian Nickel. 2018. Hearst patterns revisited: Automatic hypernym detection from large text corpora. In ACL, pages 358–363. Enrico Santus, Anna Gladkova, Stefan Evert, and Alessandro Lenci. 2016a. The cogalex-v shared task on the corpus-based identification of semantic relations. In CogALex@COLING, pages 69–79. Enrico Santus, Alessandro Lenci, Tin-Shing Chiu, Qin Lu, and Chu-Ren Huang. 2016b. Nine features in a random forest to learn taxonomical semantic relations. In LREC. Enrico Santus, Frances Yung, Alessandro Lenci, and Chu-Ren Huang. 2015. Evalution 1.0: an evolving semantic dataset for training and evaluation of distributional semantic models. In LDL@IJCNLP. Jiaming Shen, Zeqiu Wu, Dongming Lei, Chao Zhang, Xiang Ren, Michelle T. Vanni, Brian M. Sadler, and Jiawei Han. 2018. Hiexpan: Task-guided taxonomy construction by hierarchical tree expansion. In KDD, pages 2180–2189. Vered Shwartz and Ido Dagan. 2016. Path-based vs. distributional information in recognizing lexical semantic relations. In CogALex@COLING. Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving hypernymy detection with an integrated path-based and distributional method. In ACL. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In AAAI, pages 4444–4451. Ivan Vulic and Nikola Mrksic. 2018. Specialising word vectors for lexical entailment. In NAACL, pages 1134–1145. Ekaterina Vylomova, Laura Rimell, Trevor Cohn, and Timothy Baldwin. 2016. Take and took, gaggle and goose, book and read: Evaluating the utility of vector differences for lexical relation learning. In ACL. Chengyu Wang and Xiaofeng He. 2016. Chinese hypernym-hyponym extraction from user generated categories. In COLING, pages 1350–1361. Chengyu Wang, Xiaofeng He, and Aoying Zhou. 2017a. A short survey on taxonomy learning from text corpora: Issues, resources and recent advances. In EMNLP, pages 1190–1203. Chengyu Wang, Junchi Yan, Aoying Zhou, and Xiaofeng He. 2017b. Transductive non-linear learning for chinese hypernym prediction. In ACL, pages 1394–1404. Feng Wang, Xiang Xiang, Jian Cheng, and Alan Loddon Yuille. 2017c. Normface: L2 hypersphere embedding for face verification. In ACM MM. Koki Washio and Tsuneaki Kato. 2018a. Filling missing paths: Modeling co-occurrences of word pairs and dependency paths for recognizing lexical semantic relations. In NAACL, pages 1123–1133. Koki Washio and Tsuneaki Kato. 2018b. Neural latent relational analysis to capture lexical semantic relations in a vector space. In EMNLP, pages 594–600. Julie Weeds, Daoud Clarke, Jeremy Reffin, David J. Weir, and Bill Keller. 2014. Learning to distinguish hypernyms and co-hyponyms. In COLING, pages 2249–2259. Josuke Yamane, Tomoya Takatani, Hitoshi Yamada, Makoto Miwa, and Yutaka Sasaki. 2016. Distributional hypernym generation by jointly learning clusters and projections. In COLING, pages 1871–1879. Shuo Yang, Lei Zou, Zhongyuan Wang, Jun Yan, and Ji-Rong Wen. 2017. Efficiently answering technical questions - A knowledge graph approach. In AAAI, pages 3111–3118. Zheng Yu, Haixun Wang, Xuemin Lin, and Min Wang. 2015. Learning term embeddings for hypernymy identification. In IJCAI, pages 1390–1397. Wen Zhang, Jiawei Hu, Yang Feng, and Qun Liu. 2018. Refining source representations with relation networks for neural machine translation. In COLING, pages 1292–1303.
2019
169
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 175–183 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 175 Unsupervised Pivot Translation for Distant Languages Yichong Leng∗ University of Science and Technology of China [email protected] Xu Tan Microsoft Research [email protected] Tao Qin† Microsoft Research [email protected] Xiang-Yang Li† University of Science and Technology of China [email protected] Tie-Yan Liu Microsoft Research [email protected] Abstract Unsupervised neural machine translation (NMT) has attracted a lot of attention recently. While state-of-the-art methods for unsupervised translation usually perform well between similar languages (e.g., EnglishGerman translation), they perform poorly between distant languages, because unsupervised alignment does not work well for distant languages. In this work, we introduce unsupervised pivot translation for distant languages, which translates a language to a distant language through multiple hops, and the unsupervised translation on each hop is relatively easier than the original direct translation. We propose a learning to route (LTR) method to choose the translation path between the source and target languages. LTR is trained on language pairs whose best translation path is available and is applied to the unseen language pairs for path selection. Experiments on 20 languages and 294 distant language pairs demonstrate the advantages of the unsupervised pivot translation for distant languages, as well as the effectiveness of the proposed LTR for path selection. Specifically, in the best case, LTR achieves an improvement of 5.58 BLEU points over the conventional direct unsupervised method. 1 Introduction Unsupervised neural machine translation (NMT) (Artetxe et al., 2017b; Lample et al., 2017, 2018), which uses only monolingual sentences for translation, is of great importance ∗The work was done when the first author was an intern at Microsoft Research Asia. † Corresponding author for zero-resource language pairs. Unsupervised translation relies on unsupervised cross-lingual word alignment or sentence alignment (Conneau et al., 2017; Artetxe et al., 2017a), where word embedding mapping (Artetxe et al., 2017b; Lample et al., 2017) and vocabulary sharing (Lample et al., 2018) are used for word alignment, and encoder/decoder weight sharing (Artetxe et al., 2017b; Lample et al., 2018) and adversarial training (Lample et al., 2017) are used for sentence alignment. Unsupervised cross-lingual alignment works reasonably well for a pair of similar languages, such as English-German or Portuguese-Galician, since they have similar lexica and syntax and share the same alphabets and language branch. However, the alignment between a pair of distant languages, which are not in the same language branch1, such as Danish-Galician is challenging. As a consequence, unsupervised translation between distant languages is usually of lower quality. For example, the unsupervised NMT model achieves 23.43 BLEU score on PortugueseGalician translation, while just 6.56 on DanishGalician translation according to our experiments. In this work, we focus on unsupervised translation of distant languages. We observe that two distant languages can be linked through multiple intermediate hops where unsupervised translation of two languages on each 1In this work, we use language branch to determine distant languages. 
We choose the taxonomy of language family provided by Ethnologue (Paul et al., 2009) (https://www.ethnologue.com/browse/families), which is one of the most authoritative and commonly accepted taxonomies. Distant languages can also be defined using other principles, which we leave to future work. 176 hop is easier than direct translation of the two distance languages, considering that the two languages on each intermediate hop are more similar, or the size of monolingual training data is larger. Therefore, we propose unsupervised pivot translation through multiple hops for distant languages, where each hop consists of unsupervised translation of a relatively easier language pair. For example, the distant language pair Danish→Galician can be translated by three easier hops: Danish→English, English→Spanish and Spanish→Galician. In this way, unsupervised pivot translation results in better accuracy (12.14 BLEU score) than direct unsupervised translation (6.56 BLEU score) from Danish to Galician in our experiments. The challenge of unsupervised pivot translation is how to choose a good translation path. Given a distant language pair X→Y, there exists a large amount of paths that can translate from X to Y2, and different paths may yield very different translation accuracies. Therefore, unsupervised pivot translation may result in lower accuracy than direct unsupervised translation if a poor path is chosen. How to choose a path with good translation accuracy is important to guarantee the performance of unsupervised pivot translation. A straightforward method is to calculate the translation accuracies of all possible paths on a validation set and choose the path with the best accuracy. However, it is computationally unaffordable due to the large amount of possible paths. For example, suppose we consider at most 3 hops (N = 3) and 100 languages (M = 100), and assume each path takes an average of 20 minutes to translate all the sentences in the validation set using one NVIDIA P100 GPU to get the BLEU score. Then it will take nearly 1400000 GPU days to evaluate all candidate paths. Even if we just consider 20 languages (M = 20), it will still take 2200 GPU days. Therefore, an efficient method for path selection is needed. We propose a learning to route (LTR) method that adopts a path accuracy predictor (a multi-layer LSTM) to select a good path for a distant language pair. Given a translation path and the translation accuracy of each hop on the path, the path accuracy predictor can predict the overall translation accuracy following this 2Suppose we only consider translation paths with at most N hops. Given M candidate intermediate languages, there are M! (M−N+1)! possible paths. path. Such a predictor is first trained on a training set of paths with known overall accuracy, and then used to predict the accuracy of a path unseen before. We conduct experiments on a large dataset with 20 languages and a total of 294 distant language pairs to verify the effectiveness of our method. Our proposed LTR achieves an improvement of more than 5 BLEU points on some language pairs. The contributions of this paper are as follows: (1) We introduce pivot translation into unsupervised NMT to improve the accuracy of distant languages. (2) We propose the learning to route (LTR) method to automatically select the good translation path. (3) Large scale experiments on more than 20 languages and 294 distant language pairs demonstrate the effectiveness of our method. 
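As an illustration of the computational argument above, the number of candidate paths and the resulting evaluation cost can be checked with a short back-of-the-envelope calculation. The sketch below is a minimal reading of that argument, assuming pivot languages at each hop are drawn from all M languages, every ordered language pair is evaluated, and each path costs 20 GPU-minutes; the function names are illustrative, and the counts reproduce the reported orders of magnitude rather than an exact protocol.

```python
def candidate_paths_per_pair(M, max_hops=3):
    """Paths with 0, 1 or 2 pivot languages for a single source-target pair."""
    total = 0
    for n_pivots in range(max_hops):      # 0, 1, 2 pivots
        count = 1
        for i in range(n_pivots):         # ordered choices of pivot languages
            count *= (M - i)
        total += count
    return total

def gpu_days(M, minutes_per_path=20.0):
    pairs = M * (M - 1)                   # ordered source-target language pairs
    paths = pairs * candidate_paths_per_pair(M)
    return paths * minutes_per_path / (60 * 24)

print(round(gpu_days(100)))   # ~1,375,000 -> "nearly 1400000 GPU days"
print(round(gpu_days(20)))    # ~2,100     -> close to the reported 2200 GPU days
```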
2 Related Work In this section, we review the related work from three aspects: unsupervised neural machine translation, pivot translation, and path routing. Unsupervised NMT As the foundation of unsupervised sentence translation, unsupervised word alignment has been investigated by (Conneau et al., 2017; Artetxe et al., 2017a), where linear embedding mapping and adversarial training are used to ensure the distribution-level matching, achieving considerable good accuracy or even surpasses the supervised counterpart for similar languages. Artetxe et al. (2017b); Lample et al. (2017) propose unsupervised NMT that leverages word translation for the initialization of the bilingual word embeddings. Yang et al. (2018) partially share the parameter of the encoder and decoder to enhance the semantic alignment between source and target language. Lample et al. (2018) further share the vocabulary of source and target languages and jointly learned the word embeddings to improve the quality of word alignment, and achieve large improvements on similar language pairs. Recently, inspired by the success of BERT (Devlin et al., 2018), Lample and Conneau (2019) leverage the BERT pre-training in the unsupervised NMT model and achieve state-of-the-art performance on some popular language pairs. Previous works on unsupervised NMT can indeed achieve good accuracy on similar language pairs, especially on the closely related languages such as English and German that are in the same language branch. In this circumstance, they can 177 simply share the vocabulary and learn joint BPE for source and target languages, and share the encoder and decoder, which is extremely helpful for word embedding and latent representation alignment. However, they usually achieve poor accuracy on distant languages that are not in the same language branch or do not share same alphabets. In this paper, we propose pivot translation for distant languages, and leverage the basic unsupervised NMT model in (Lample et al., 2018) on similar languages as the building blocks for the unsupervised pivot translation. Pivot Translation Pivot translation has long been studied in statistical machine translation to improve the accuracy of low/zero-resource translation (Wu and Wang, 2007; Utiyama and Isahara, 2007). Cheng et al. (2017); Chen et al. (2017) also adapt the pivot based method into neural machine translation. However, conventional pivot translation usually leverages a resource-rich language (mainly English) as the pivot to help the low/zeroresource translation, while our method only relies on the unsupervised method in each hop of the translation path. Due to the large amount of possible path choices, the accuracy drops quickly along the multiple translation hops in the unsupervised setting, unsupervised pivot translation may result in lower accuracy if the path is not carefully chosen. In this situation, path selection (path routing) will be critical to guarantee the performance of pivot translation. Path Routing Routing is the process of selecting a path for traffic in a network, or between or across multiple networks. Generally speaking, routing can be performed in many types of networks, including circuit switching network (Girard, 1990), computer networks (e.g., Internet) (Huitema, 2000), transportation networks (Raff, 1983) and social networks (LibenNowell et al., 2005). In this paper, we consider the routing of the translation among multiple languages, where the translation accuracy is the criterion for the path selection. 
Usually, the translation accuracy of the multi-hop path is not simply the linear combination of the accuracy on each onehop path, which makes it difficult to route for a good path. 3 Unsupervised Pivot Translation Observing that unsupervised translation is usually hard for distant languages, we split the direct translation into multiple hops, where the unsupervised translations on each hop is relatively easier than the original direct unsupervised translation. Formally, for the translation from language X to Y , we denote the pivot translation as X →Z1 →... →Zn →Y, (1) where Z1, ..., Zn are the pivot languages and n is the number of pivot languages in the path. We set n ∈{0, 1, 2} and consider 3-hop path at most in this paper, considering the computation cost and accuracy drop in longer translation path. Note that when n = 0, it is the one-hop (direct) translation. There exists a large amount of translation paths between X and Y and each path can result in different translation accuracies, or even lower than the direct unsupervised translation, due to the information loss along the multiple translation hops especially when unsupervised translation quality on one hop is low. Therefore, how to choose a good translation path is critical to ensure the accuracy of unsupervised pivot translation. In this section, we introduce the learning to route (LTR) method for the translation path selection. 3.1 Learning to Route In this section, we first give the description of the problem formulation, and then introduce the training data, features and model used for LTR. Problem Formulation We formulate the path selection as a translation accuracy prediction problem. The LTR model learns to predict the translation accuracy of each path from language X to Y given the translation accuracy of each hops in the path, and the path with the highest predicted translation accuracy among all the possible paths is chosen as the output. Training Data We construct the training data for the LTR model in the following steps: (1) From M languages, we choose the distant language pairs whose source and target languages are not in the same language branch. We then choose a small part of the distant language pairs as the development/test set respectively for LTR, and regard the remaining part as the training set for LTR. (2) In order to get the accuracy of different translation paths for the distant language pairs, 178 as well as to obtain the input features for LTR, we train the unsupervised model for the translation between any languages and obtain the BLEU score of each pair. For M languages, there are total M(M −1) language pairs and BLEU scores, which requires M(M −1)/2 unsupervised models since one model can handle both translation directions following the common practice in unsupervised NMT (Lample et al., 2018). (3) We then test the BLEU scores of the each possible translation path for the language pairs in the development and test sets, based on the models trained in previous steps. These BLEU scores are regarded as the ground-truth data to evaluate the performance of unsupervised pivot translation. (4) We just get the BLEU scores of a small part of the possible paths in the training set, which are used as the training data for LTR model3. We describe the features of the training data in the next paragraph. Features We extract several features from the paths in training data for better model training. 
Without loss of generality, we take a 3-hop path X →Z1 →Z2 →Y as an example, and regard it as a token sequence consisting of languages and one-hops: X, X →Z1, Z1, Z1 →Z2, Z2, Z2 →Y and Y . We consider two kinds of features for the token sequence: (1) The token ID. There are a total of 7 tokens in the path shown above. Each token ID is converted into trainable embeddings. For a one-hop token like Z1 →Z2, its embedding is simply the average of the two embeddings of Z1 and Z2. (2) The BLEU score of each language and one-hop, where we get the BLEU score of each language by averaging the accuracy of the one-hop path from or to this language. For example, the BLEU score of the target language Z1 in X →Z1 is calculated by averaging all the BLEU scores of the one-hop translation from other languages to Z1, while the BLEU score of the source language Z1 in Z1 →Z2 is calculated by averaging the BLEU scores of all the one-hop translation from Z1 to other languages. We concatenate the above two features together in one vector for each language and one-hop token, and get a sequence of features for each path. The BLEU score of the path will be used as the label for the LTR model. 3As described in Footnote 2, we cannot afford to test the BLEU scores of all the possible paths, so we just test a small part of them for training. Model We use a multi-layer LSTM model to predict the BLEU score of the translation path. The input of the LSTM model is the feature sequence described in the above paragraph. The last hidden of LSTM is multiplied with an onedimensional vector to predict the BLEU score of the path. 3.2 Discussions We make brief discussions on some possible baseline routing methods and compare them with our proposed LTR. Random Routing: We randomly choose a path as the routing result. Prior Pivoting: We set the pivot language for each language according to prior knowledge4. Denote PX and PY as the pivot language for X and Y respectively. The path X →PX →PY →Y will be chosen as the routing result by prior pivoting. Hop Average: The average of the BLEU scores of each one-hop in the path is taken as the predicted BLEU score for this path. We select the path with the highest predicted BLEU score, as used in the LTR method. Compared with these simple rule based routing methods described above, LTR chooses the path purely by learning on a part of the ground-truth paths. The feature we designed in LTR can capture the relationship between languages to determine the BLEU score and relative ranking of the paths. This data-driven learning based method (LTR) will be more accurate than the rule based methods. In the next section, we conduct experiments to verify effectiveness of our proposed LTR and compare with the baseline methods. 4 Experiments Design Our experiments consist of two stages in general. In the first stage, we need to train the unsupervised NMT model between any two languages to get the BLEU scores of each one-hop path. We also get the BLEU scores for a part of multi-hop paths through pivoting, which are used as the training and evaluation data for the second stage. In the second stage, we train the LTR model based on the training data generated in the first stage. In this section, we give brief descriptions of the experiment settings for the unsupervised NMT model 4For the languages in each language branch, we choose the language with the largest amount of monolingual data in this branch as the pivot language. All languages in the same language branch share the same pivot language. 
179 training (the first stage) and the LTR model training and path routing (the second stage). 4.1 Experiment Setting for Direct Unsupervised NMT Datasets We conduct the experiments on 20 languages and a total of 20×19=380 language pairs, which have no bilingual sentence pairs but just monolingual sentences for each language. The languages involved in the experiments can be divided into 4 language branches by the taxonomy of language family: Balto-Slavic branch, Germanic branch, Italic branch and Uralic branch5. The language name and its ISO 639-1 code contained in each branch can be found in the supplementary materials (Section 1 and 2). We collect the monolingual corpus from Wikipedia for each language. We download the language specific Wikipedia contents in XML format6, and use WikiExtractor7 to extract and clean the texts. We then use the sentence tokenizer from the NLTK toolkit8 to generate segmented sentences from Wikipedia documents. To ensure we have the development and test set for the large amount of language pairs to evaluate the unsupervised translation accuracy in our experiments, we choose the languages that are covered by the common corpus of TED talks, which contains translations between more than 50 languages (Ye et al., 2018)9. In this circumstance, we can leverage the development and test set from TED talks for evaluation. Note that in the unsupervised setting, we just leverage monolingual sentences for unsupervised training and only use the bilingual data for developing and testing. In order to alleviate the domain mismatch problem that we train on monolingual data from Wikipedia but test on the evaluation data from TED talks, we also fine-tune the unsupervised models with the small size of monolingual data from TED talks10. The monolingual data from TED talks is merged with the monolingual data from Wikipedia in the 5The first three branches belong to Indo-European family while the last branch is actually a language family. We do not further split the 3 languages in Uralic family into different branches. 6For example, we download English Wikipedia contents from https://dumps.wikimedia.org/enwiki. 7https://github.com/attardi/wikiextractor 8https://www.nltk.org/ 9https://github.com/neulab/word-embeddings-for-nmt 10https://github.com/ajinkyakulkarni14/TEDMultilingual-Parallel-Corpus/tree/master/Monolingual data fine-tuning process, which results in better performance for the unsupervised translation. The size of Wikipidia and TED talks monolingual data can be found in the supplementary materials (Section 3). All the sentences in the bilingual and monolingual data are first tokenized with moses tokenizer11 and then segmented into subword symbols using Byte Pair Encoding (BPE) (Sennrich et al., 2016). When training the unsupervised model, we learn the BPE tokens with 60000 merge operations across the source and target languages for each language pair and jointly training the embedding using fastext12, following the practice in Lample et al. (2018). Model Configurations We use transformer model as the basic NMT model structure, considering it achieves state-of-the-art accuracy and becomes a popular choice for recent NMT research. We use 4-layer encoder and 4-layer decoder with model hidden size dmodel and feed-forward hidden size dff being 512, 2048 following the default configurations in Lample et al. (2018). We use the same model configurations for all the language pairs. Model Training and Inference We train the unsupervised model with 1 NVIDIA Tesla V100 GPU. 
One mini-batch contains roughly 4096 source tokens and 4096 target tokens, as used in Lample et al. (2018). We follow the default parameters of Adam optimizer (Kingma and Ba, 2014) and learning rate schedule in Vaswani et al. (2017). During inference, we decode with greedy search for all the languages. We evaluate the translation quality by tokenized case sensitive BLEU (Papineni et al., 2002) with multi-bleu.pl13. 4.2 Experiment Setting for Routing Configurations for Routing We choose the distant language pairs from the 20 languages based on the taxonomy of language family: if two languages are not in the same language branch, then they are regarded as distant languages. We get 294 distant language pairs. As described in Section 3.1, we choose nearly 5% and 10% of the distant language pairs as the development and test set 11https://github.com/moses-smt/mosesdecoder/blob/mast er/scripts/tokenizer/tokenizer.perl 12https://github.com/facebookresearch/fastText 13https://github.com/moses-smt/mosesdecoder/blob/ master/scripts/generic/multi-bleu.perl 180 Source Target DT GT GT(∆) Pivot-1 Pivot-2 LTR LTR(∆) Pivot-1 Pivot-2 Da Gl 6.56 12.14 5.58 En Es 12.14 5.58 En Es Bg Sv 4.72 9.92 5.20 En En 9.92 5.20 En En Gl Sv 3.79 8.62 4.83 Es En 8.62 4.83 Es En Sv Gl 3.70 8.13 4.43 En Es 8.13 4.43 En Es Be It 2.11 6.40 4.29 Uk En 5.24 3.13 En En Pt Be 4.76 8.86 4.10 Ru Ru 8.86 4.10 Ru Ru Gl Da 7.45 11.33 3.88 Es Es 11.33 3.88 Es Es Be Pt 6.39 9.77 3.38 Ru Ru 6.39 0.00 It Be 2.24 5.19 2.95 Pt Ru 4.64 2.40 Ru Ru Nl Uk 4.69 7.23 2.54 De De 7.12 2.53 Ru Ru Table 1: The BLEU scores of a part of the distant language pairs in the test set (Please refer to Section 1 and 4 in the supplementary materials for the corresponding full language name and full results). DT: direct unsupervised translation. GT: the ground-truth best path. LTR: the routing results of LTR. (∆) is the BLEU gap between GT or LTR and DT. Pivot-1 and Pivot-2 are two pivot languages in the path, which will be the same language if the path is 2-hop and will both be empty if the path is 1-hop (direct translation). Length 1-hop 2-hop 3-hop Ratio (%) 7.1 53.6 39.3 Table 2: The length distribution of the best translation paths. The ratio is calculated based on all language pairs in the test set. for routing. Note that if the language pair X →Y is in development (test) set, then the language pair Y →X will be also in development (test) set. We then enumerate all possible paths between any two language pairs in development and test set, and test the BLEU scores of the each possible path, which are regarded as the ground-truth data to evaluate the performance of the routing method. For the remaining 85% distant language pairs, we just test the BLEU score for 10% of all possible paths, and use these BLEU scores as the label for LTR model training. We use 2-layer LSTM as described in Section 3.1. The dimension of input feature vector is 6, which includes the embedding of the token ID with size of 5, the BLEU score with size 1 (we normalize the BLEU score into 0-1). We change the depth and width of LSTM, but there is no significant gain in performance. We use the mean square error as the training loss for the LTR model, and use Adam as the optimizer. The initial learning rate is 0.01. 
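A minimal sketch of the LTR predictor described above is given below, assuming a PyTorch implementation. The feature layout (a 5-dimensional language embedding concatenated with a normalised BLEU score, giving 6-dimensional inputs to a 2-layer LSTM, MSE loss, Adam with learning rate 0.01) follows the description above; the hidden size, the single-path batching, and all helper names are illustrative choices rather than details of the original implementation.

```python
import torch
import torch.nn as nn

class LTRPredictor(nn.Module):
    def __init__(self, n_languages, emb_dim=5, hidden_dim=64):
        super().__init__()
        self.lang_emb = nn.Embedding(n_languages, emb_dim)
        # input feature = language/one-hop embedding (5) + its BLEU score (1)
        self.lstm = nn.LSTM(emb_dim + 1, hidden_dim, num_layers=2,
                            batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)    # the "one-dimensional vector"

    def token_embedding(self, token):
        # a token is a language id or a (src, tgt) one-hop pair, whose
        # embedding is the average of the two language embeddings
        if isinstance(token, tuple):
            src, tgt = token
            return 0.5 * (self.lang_emb.weight[src] + self.lang_emb.weight[tgt])
        return self.lang_emb.weight[token]

    def forward(self, tokens, bleus):
        # tokens: e.g. [X, (X,Z1), Z1, (Z1,Z2), Z2, (Z2,Y), Y] for a 3-hop path
        # bleus:  normalised BLEU score for each token
        feats = [torch.cat([self.token_embedding(t),
                            torch.tensor([b], dtype=torch.float32)])
                 for t, b in zip(tokens, bleus)]
        x = torch.stack(feats).unsqueeze(0)    # (1, seq_len, 6)
        _, (h, _) = self.lstm(x)
        return self.out(h[-1]).squeeze()       # predicted path-level BLEU

model = LTRPredictor(n_languages=20)
optim = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# one illustrative update on a single 2-hop training path with a known BLEU label
pred = model(tokens=[0, (0, 2), 2, (2, 3), 3],
             bleus=[0.31, 0.28, 0.35, 0.22, 0.27])
loss = loss_fn(pred, torch.tensor(0.25))
loss.backward(); optim.step(); optim.zero_grad()
```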
When applying the LTR model on unseen pairs, we predict the BLEU scores of all the possible paths (including 1-hop (direct translation), 2-hop and 3hop translation path) between the source and target languages, and choose the path with the highest predicted BLEU score as the routing result. Note that when predicting the path with LTR in inference time, we do not include the pivot language which occurs less than 10 times in training set, which can improve that stability of the LTR prediction. Methods for Comparison We conduct experimental comparisons on different methods described in Section 3 for path selection (routing), including Random Routing (RR), Prior Pivoting (PP), Hop Average (HA) and Learning to Route (LTR). We also compare these routing methods with the direct unsupervised translation, denoted as Direct Translation (DT). We list the BLEU score of the best multi-hop path (the ground truth) as a reference, which is denoted as Ground Truth (GT). 5 Results In this section, we introduce the performance of unsupervised pivot translation for distant languages. We first demonstrate the advantages of unsupervised pivot translation by comparing the best translation path (GT) with direction translation (DT), and then show the results of our proposed LTR. We also compare LTR with other routing methods (RR, PP and HA) to demonstrate its effectiveness. 5.1 The Advantage of Unsupervised Pivot Translation In order to demonstrate the advantage of unsupervised pivot translation for distant languages, we first analyze the distribution of the length of the best translation paths (GT), as shown in Table 2. The direction translation (1-hop) only takes a ratio of 7.1%, which means that a majority (92.9%) 181 Figure 1: The CDF of the BLEU scores for the distant language pairs in the test set. The green curve represents the direct unsupervised translation (DT), and the black curve represents the best translation path (GT). The other three curves represent the three routing methods for comparison: blue for hop average (HA), cyan for prior pivoting (PP) and red for our proposed learning to route (LTR). of the distant language pairs need multiple hops to improve the translation accuracy. We further compare the BLEU score of the best path (GT, which is also the upper-bound of different routing methods) with the direct unsupervised translation, and show the results for a part of distant languages pairs in Table 114. It can be seen that GT can largely outperform the direct translation DT with up to 5.58 BLEU points. We further plot the CDF of the BLEU scores on all the distant language pairs in the test set in Figure 1. It can be seen that the CDF curve of GT is always in the right part of DT, which means better accuracy and demonstrates the advantage of unsupervised pivot translation for distant languages. 5.2 Results of LTR Accuracy of LTR Model As our LTR selects the good path by ranking according to the predicted BLEU score, we first report the accuracy of selecting the best path. LTR can achieve 57% in terms of top-1 accuracy and 86% in terms of top-5 accuracy. Although the top-1 accuracy is not so high, it is acceptable because there exists some other route path just a little bit lower than the best path. We show the routing results of our LTR for some language pairs in Table 1. Take the Nl-Uk language pair in Table 1 as an example. 
The routing result of LTR for this pair does not match with GT, which 14Due to space limitation, we leave the full results of the distant language pairs in the test set in the supplementary materials (Section 4). Methods DT RR HA PP LTR GT BLEU 6.06 3.40 6.92 7.12 8.33 8.70 Table 3: The performance of different routing methods. The BLEU score is averaged on all the distant language pairs in the test set. The compared methods include: DT: direct unsupervised translation, RR: random routing, PP: prior pivoting, HA: hop average, LTR: our proposed learning to route, and GT: the best translation path (the ground truth). affects the top-1 accuracy. However, the BLEU gap between our selected path and the best path is as small as 0.09, which has little influence on the BLEU score of the selected path. Our further analysis in the next paragraph shows that the averaged BLEU score that LTR achieved in test set is close to that of GT. BLEU Score of Selected Path We further report the BLEU score of the translation path selected by LTR as well as other routing methods in Table 3, where the BLEU score is averaged over all the distant language pairs in the test set. It can be seen that compared with direct unsupervised translation (DT) which achieves 6.06 averaged BLEU score15, our LTR can achieve 2.27 BLEU points improvement on average, and is just 0.37 points lower than the ground truth best path (GT). The small gap between the ground truth and LTR demonstrates that although LTR fails to select the best path in 43% of the distant pairs (just 57% in terms of top-1 accuracy), it indeed chooses the path which has a BLEU score slightly lower than the best path. Random routing (RR) even performs worse than DT, demonstrating the routing problem is non-trivial. LTR outperforms PP and HA by more than 1 BLEU point on average. We also show the CDF of the BLEU scores of different methods in Figure 1, which clearly shows that LTR can outperform the PP and HA routing methods, demonstrating the effectiveness of the proposed LTR. 5.3 Extension to Supervised Pivoting In the previous experiments, we rely purely on unsupervised NMT for pivot translation, assuming that the translation on each hop cannot leverage any bilingual sentence pairs. However, there in15The averaged BLEU score seems not high, because the unsupervised translations between some hard languages in the test set obtain really low BLEU scores, which affects the average score. 182 Source Target DT GT-unsup GT-sup ∆ Source Target DT GT-unsup GT-sup ∆ Da Gl 6.56 12.14 15.20 8.64 Pt Be 4.76 8.86 13.03 8.27 Bg Sv 4.72 9.92 9.92 5.20 Gl Da 7.45 11.33 15.52 8.07 Gl Sv 3.79 8.62 9.58 5.79 Be Pt 6.39 9.77 14.50 8.11 Sv Gl 3.70 8.13 9.38 5.68 It Be 2.24 5.19 8.60 6.36 Be It 2.11 6.40 9.26 7.15 Nl Uk 4.69 7.23 8.07 3.38 Table 4: The BLEU scores of the same language pairs as shown in Table 1 (Please refer to Section 5 in the supplementary materials for the full results of the test set). GT-sup and GT-unsup represent the ground-truth best path with and without supervised pivoting. ∆is the BLEU gap between GT-sup and DT. Methods DT RR HA PP LTR GT BLEU 6.06 3.46 7.07 8.84 9.45 9.79 Table 5: The performance of different routing methods when enhanced with supervised pivoting. The BLEU score is averaged on all the distant language pairs in the test set. The compared methods include: DT: direct unsupervised translation, RR: random routing, HA: hop average, PP: prior pivoting, LTR: our proposed learning to route, and GT: the best translation path (the ground truth). 
deed exist plenty of bilingual sentence pairs between some languages, especially among the popular languages of the world, such as the official languages of the United Nations and the European Union. If we can rely on some supervised hop in the translation path, the accuracy of the translation for distant languages would be greatly improved. Take the translation from Danish to Galician as an example. The BLEU score of the direct unsupervised translation is 6.56, while the ground-truth best unsupervised path (Danish→ English→Spanish→Galician) can achieve a BLEU score of 12.14, 5.58 points higher than direct unsupervised translation. For the translation on the intermediate hop, i.e, English→Spanish, we have a lot of bilingual data to train a strong supervised translation model. If we replace the unsupervised English→Spanish translation with the supervised counterpart, the BLEU score of the path (Danish→English→Spanish→Galician) can improve from 12.14 to 15.2, with 8.64 points gain over the direct unsupervised translation. Note that the gain is achieved without leveraging any bilingual sentence pairs between Danish and Galician. Without loss of generality, we choose 6 popular languages (we select English, German, Spanish, French, Finish and Russian to cover each language branch we considered in this work) as the supervised pivot languages and replace the translations between these languages with the supervised counterparts. Note that we do not leverage any bilingual data related to the source language and target languages, and the supervised models are only used in the intermediate hop of a 3-hop path. For the bilingual sentence pairs between pivot languages, we choose the common corpus of TED talk which contains translations between multiple languages (Ye et al., 2018)16. Table 4 shows the performance improvements on the language pairs (the same pairs as shown in Table 1). When enhanced with supervised pivoting, we can achieve more than 8 BLEU points gain over DT on 4 language pairs, without using any bilingual data between the source language or target language. We also compare our proposed learning to route method LTR with RR, HA and PP, as showed in Table 5. We conduct the experiments on the original development and test set, but removing the language pairs whose source and target languages belong to the supervised pivot languages we choose. It can be seen that LTR can still outperform RR, HA and PP and be close to GT, demonstrating the effectiveness of LTR in the supervised pivoting setting. 6 Conclusions and Future Work In this paper, we have introduced unsupervised pivot translation for distant language pairs, and proposed the learning to route (LTR) method to automatically select a good translation path for a distant language pair. Experiments on 20 languages and totally 294 distant language pairs demonstrate that (1) unsupervised pivot translation achieves large improvements over direct unsupervised translation for distant languages; (2) our proposed LTR can select the translation path whose translation accuracy is close to the ground16This is the same dataset where we choose the development and test sets in Section 4.1. The data can be downloaded from https://github.com/neulab/word-embeddings-for-nmt. 183 truth best path; (3) if we leverage supervised translation instead of the unsupervised translation for some popular language pairs in the intermediate hop, we can further boost the performance of unsupervised pivot translation. 
For further works, we will leverage more supervised translation hops to improve the performance of unsupervised translation for distant languages. We will extend our method to more distant languages. References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017a. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 451–462. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2017b. Unsupervised neural machine translation. arXiv preprint arXiv:1710.11041. Yun Chen, Yang Liu, Yong Cheng, and Victor OK Li. 2017. A teacher-student framework for zeroresource neural machine translation. arXiv preprint arXiv:1705.00753. Yong Cheng, Qian Yang, Yang Liu, Maosong Sun, and Wei Xu. 2017. Joint training for pivot-based neural machine translation. In Proceedings of IJCAI. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Andre Girard. 1990. Routing and dimensioning in circuit-switched networks. Addison-Wesley Longman Publishing Co., Inc. Christian Huitema. 2000. Routing in the Internet. Prentice-Hall,. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. CoRR, abs/1901.07291. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 5039–5049. David Liben-Nowell, Jasmine Novak, Ravi Kumar, Prabhakar Raghavan, and Andrew Tomkins. 2005. Geographic routing in social networks. Proceedings of the National Academy of Sciences, 102(33):11623–11628. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA., pages 311–318. Lewis M Paul, Gary F Simons, Charles D Fennig, et al. 2009. Ethnologue: Languages of the world. Dallas, TX: SIL International. Available online at www. ethnologue. com/. Retrieved June, 19:2011. Samuel Raff. 1983. Routing and scheduling of vehicles and crews: The state of the art. Computers & Operations Research, 10(2):63–211. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Masao Utiyama and Hitoshi Isahara. 2007. A comparison of pivot methods for phrase-based statistical machine translation. 
In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 484–491. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS 2017, 4-9 December 2017, Long Beach, CA, USA, pages 6000–6010. Hua Wu and Haifeng Wang. 2007. Pivot language approach for phrase-based statistical machine translation. Machine Translation, 21(3):165–181. Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Unsupervised neural machine translation with weight sharing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 1520, 2018, Volume 1: Long Papers, pages 46–55. Qi Ye, Sachan Devendra, Felix Matthieu, Padmanabhan Sarguna, and Neubig Graham. 2018. When and why are pre-trained word embeddings useful for neural machine translation. In HLT-NAACL.
2019
17
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1738–1750 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1738 Multilingual Factor Analysis Francisco Vargas, Kamen Brestnichki, Alex Papadopoulos-Korfiatis and Nils Hammerla Babylon Health {firstname.lastname, alex.papadopoulos}@babylonhealth.com Abstract In this work we approach the task of learning multilingual word representations in an offline manner by fitting a generative latent variable model to a multilingual dictionary. We model equivalent words in different languages as different views of the same word generated by a common latent variable representing their latent lexical meaning. We explore the task of alignment by querying the fitted model for multilingual embeddings achieving competitive results across a variety of tasks. The proposed model is robust to noise in the embedding space making it a suitable method for distributed representations learned from noisy corpora. 1 Introduction Popular approaches for multilingual alignment of word embeddings base themselves on the observation in (Mikolov et al., 2013a), which noticed that continuous word embedding spaces (Mikolov et al., 2013b; Pennington et al., 2014; Bojanowski et al., 2017; Joulin et al., 2017) exhibit similar structures across languages. This observation has led to multiple successful methods in which a direct linear mapping between the two spaces is learned through a least squares based objective (Mikolov et al., 2013a; Smith et al., 2017; Xing et al., 2015) using a paired bilingual dictionary. An alternate set of approaches based on Canonical Correlation Analysis (CCA) (Knapp, 1978) seek to project monolingual embeddings into a shared multilingual space (Faruqui and Dyer, 2014b; Lu et al., 2015). Both these methods aim to exploit the correlations between the monolingual vector spaces when projecting into the aligned multilingual space. The multilingual embeddings from (Faruqui and Dyer, 2014b; Lu et al., 2015) are shown to improve on word level semantic tasks, which sustains the authors’ claim that multilingual information enhances semantic spaces. In this paper we present a new non-iterative method based on variants of factor analysis (Browne, 1979; McDonald, 1970; Browne, 1980) for aligning monolingual representations into a multilingual space. Our generative modelling assumes that a single word translation pair is generated by an embedding representing the lexical meaning of the underlying concept. We achieve competitive results across a wide range of tasks compared to state-of-the-art methods, and we conjecture that our multilingual latent variable model has sound generative properties that match those of psycholinguistic theories of the bilingual mind (Weinreich, 1953). Furthermore, we show how our model extends to more than two languages within the generative framework which is something that previous alignment models are not naturally suited to, instead resorting to combining bilingual models with a pivot as in (Ammar et al., 2016). Additionally the general benefit of the probabilistic setup as discussed in (Tipping and Bishop, 1999) is that it offers the potential to extend the scope of conventional alignment methods to model and exploit linguistic structure more accurately. An example of such a benefit could be modelling how corresponding word translations can be generated by more than just a single latent concept. 
This assumption can be encoded by a mixture of Factor Analysers (Ghahramani et al., 1996) to model word polysemy in a similar fashion to (Athiwaratkun and Wilson, 2017), where mixtures of Gaussians are used to reflect the different meanings of a word. The main contribution of this work is the application of a well-studied graphical model to a novel domain, outperforming previous approaches on word and sentence-level translation retrieval 1739 y x z N Figure 1: Graphical model for alignment. Latent space z represents the aligned shared space between the two vector spaces x and y. tasks. We put the model through a battery of tests, showing it aligns embeddings across languages well, while retaining performance on monolingual word-level and sentence-level tasks. Finally, we apply a natural extension of this model to more languages in order to align three languages into a single common space. 2 Background Previous work on the topic of embedding alignment has assumed that alignment is a directed procedure — i.e. we want to align French to English embeddings. However, another approach would be to align both to a common latent space that is not necessarily the same as either of the original spaces. This motivates applying a well-studied latent variable model to this problem. 2.1 Factor Analysis Factor analysis (Spearman, 1904; Thurstone, 1931) is a technique originally developed in psychology to study the correlation of latent factors z ∈Rk on observed measurements x ∈Rd. Formally: p(z) = N(z; 0, I), p(x|z) = N(x; W z + µ, Ψ). In order to learn the parameters W , Ψ of the model we maximise the marginal likelihood p(x|W , Ψ) with respect to W , Ψ. The maximum likelihood estimates of these procedures can be used to obtain latent representations for a given observation Ep(z|x)[z]. Such projections have been found to be generalisations of principal component analysis (Pearson, 1901) as studied in (Tipping and Bishop, 1999). 2.2 Inter-Battery Factor Analysis Inter-Battery Factor Analysis (IBFA) (Tucker, 1958; Browne, 1979) is an extension of factor xv xj x1 z ... ... N . Figure 2: Graphical model for MBFA. Latent space z represents the aligned shared space between the multiple vector spaces {xj}v j=1. analysis that adapts it to two sets of variables x ∈Rd, y ∈Rd′ (i.e. embeddings of two languages). In this setting it is assumed that pairs of observations are generated by a shared latent variable z p(z) = N(z; 0, I), p(x|z) = N(x; Wxz + µx, Ψx), p(y|z) = N(y; Wyz + µy, Ψy). (1) As in traditional factor analysis, we seek to estimate the parameters that maximise the marginal likelihood arg max {Ψi,Wi} Y k p(x(k), y(k)|{Ψi, Wi}i), subject to Ψi ≻0, (W ⊤ i Wi) ≽0, (2) where the joint marginal p(xk, yk|{Ψi, Wi}i) is a Gaussian with the form N x y  ; µx µy  , ΣxxΣxy Σyx Σyy  , Σij = WiW ⊤ j + δijΨi, and Ψ ≻0 means Ψ is positive definite. Maximising the likelihood as in Equation 2 will find the optimal parameters for the generative process described in Figure 1 where one latent z is responsible for generating a pair x, y. This makes it a suitable objective for aligning the vector spaces of x, y in the latent space. In contrast to the discriminative directed methods in (Mikolov et al., 2013a; Smith et al., 2017; Xing et al., 2015), IBFA has the capacity to model noise. 
We can re-interpret the logarithm of Equation 2 1740 (as shown in Appendix D) as X k log p(x(k),y(k)|θ)=C+ X k (Ly|x k +Lx k), (3) Ly|x k = −1 2||˜y(k) −WyEp(z|x(k))[z]||2 Σy|x, Lx k = −1 2||˜x(k) −WxEp(z|x(k))[z]||2 ΨxΣ−1 x Ψx, C = −N 2 (log |2πΣy|x| + log |2πΣx|). The exact expression for Σy|x is given in the same appendix. This interpretation shows that for each pair of points, the objective is to minimise the reconstruction errors of x and y, given a projection into the latent space Ep(z|xk)[z]. By utilising the symmetry of Equation 2, we can show the converse is true as well — maximising the joint probability also minimises the reconstruction loss given the latent projections Ep(z|yk)[z]. Thus, this forces the latent embeddings of xk and yk to be close in the latent space. This provides intuition as to why embedding into this common latent space is a good alignment procedure. In (Browne, 1979; Bach and Jordan, 2005) it is shown that the maximum likelihood estimates for {Ψi, Wi} can be attained in closed form ˆ Wi = SiiUiP 1/2, ˆΨi = Sii −ˆ Wi ˆ W ⊤ i , ˆµx = ¯x, ˆµy = ¯y, where Sxx = 1 m m X i=1 ˜x(i) ˜x(i)⊤, Syy = 1 m m X i=1 ˜y(i) ˜y(i)⊤, Ui = S−1/2 ii Vi, VxP V ⊤ y = SVD(S−1/2 xx SxyS−1/2 yy ). The projections into the latent space from x are given by (as proved in Appendix B) Ep(z|x)[z] = (I + W ⊤ x Ψ−1 x Wx)−1W ⊤ x Ψ−1 x ˜x, ˜x = x −µx. (4) Evaluated at the MLE, (Bach and Jordan, 2005) show that Equation 4 can be reduced to Ep(z|x)[z] = P 1/2U ⊤ x (x −µx). 2.2.1 Multiple-Battery Factor Analysis Multiple-Battery Factor Analysis (MBFA) (McDonald, 1970; Browne, 1980) is a natural extension of IBFA that models more than two views of observables (i.e. multiple languages), as shown in Figure 2. Formally, for a set of views {x1, ..., xv}, we can write the model as p(z) = N(z; 0, I), p(xi|z) = N(xi; Wiz + µi, Ψi). Similar to IBFA the projections to the latent space are given by Equation 4, and the marginal yields a very similar form N      x1 ... xv  ;   µ1 ... µv  ,   W1W ⊤ 1 +Ψ1. . . W1W ⊤ v ... ... ... WvW ⊤ 1 . . .WvW ⊤ v +Ψv     . Unlike IBFA, a closed form solution for maximising the marginal likelihood of MBFA is unknown. Because of this, we have to resort to iterative approaches as in (Browne, 1980) such as the natural extension of the EM algorithm proposed by (Bach and Jordan, 2005). Defining Mt =  I + W ⊤ t Ψ−1 t Wt −1 , Bt = MtW ⊤ t Ψ−1 t , eΨt+1 = S −SΨ−1 t WtM ⊤ t W ⊤ t+1, the EM updates are given by Wt+1 =SB⊤ t  Mt + BtSB⊤ t −1 , Ψt+1 =Bdiag  ( eΨt+1)11, . . . , ( eΨt+1)vv  , where S is the sample covariance matrix of the concatenated views (derivation provided in Appendix E). (Browne, 1980) shows that, under suitable conditions, the MLE of the parameters of MBFA is uniquely identifiable (up to a rotation that does not affect the method’s performance). We observed this in an empirical study — the solutions we converge to are always a rotation away from each other, irrespective of the parameters’ initialisation. This heavily suggests that any optimum is a global optimum and thus we restrict ourselves to only reporting results we observed when fitting from a single initialisation. The chosen initialisation point is provided as Equation (3.25) of (Browne, 1980). 1741  `book`=`kniga` /buk/ /kn'iga/ Figure 3: Weinrich’s compound model for lexical association between English and Russian. Image from (Neuser, 2017). 
3 Multilingual Factor Analysis We coin the term Multilingual Factor Analysis for the application of methods based on IBFA and MBFA to model the generation of multilingual tuples from a shared latent space. We motivate our generative process with the compound model for language association presented by (Weinreich, 1953). In this model a lexical meaning entity (a concept) is responsible for associating the corresponding words in the two different languages. We note that the structure in Figure 3 is very similar to our graphical model for IBFA specified in Figure 1. We can interpret our latent variable as the latent lexical concept responsible for associating (generating) the multilingual language pairs. Most theories that explain the interconnections between languages in the bilingual mind assume that “while phonological and morphosyntactic forms differ across languages, meanings and/or concepts are largely, if not completely, shared” (Pavlenko, 2009). This shows that our generative modelling is supported by established models of language interconnectedness in the bilingual mind. Intuitively, our approach can be summarised as transforming monolingual representations by mapping them to a concept space in which lexical meaning across languages is aligned and then performing retrieval, translation and similarity-based tasks in that aligned concept space. 3.1 Comparison to Direct Methods Methods that learn a direct linear transformation from x to y, such as (Mikolov et al., 2013a; Artetxe et al., 2016; Smith et al., 2017; Lample et al., 2018) could also be interpreted as maximising the conditional likelihood Y k p(y(k)|x(k))= Y k N(y(k); W x(k)+µ, Ψ). As shown in Appendix F, the maximum likelihood estimate for W does not depend on the noise term Ψ. In addition, even if one were to fit Ψ, it is not clear how to utilise it to make predictions as the conditional expectation Ep(y|x(k))[y] = W x(k) + µ, does not depend on the noise parameters. As this method is therefore not robust to noise, previous work has used extensive regularisation (i.e. by making W orthogonal) to avoid overfitting. 3.2 Relation to CCA CCA is a popular method used for multilingual alignment which is very closely related to IBFA, as detailed in (Bach and Jordan, 2005). (Barber, 2012) shows that CCA can be recovered as a limiting case of IBFA with constrained diagonal covariance Ψx = σ2 xI, Ψy = σ2 yI , as σ2 x, σ2 y →0. CCA assumes that the emissions from the latent spaces to the observables are deterministic. This is a strong and unrealistic assumption given that word embeddings are learned from noisy corpora and stochastic learning algorithms. 4 Experiments In this section, we empirically demonstrate the effectiveness of our generative approach on several benchmarks, and compare it with state-of-the-art methods. We first present cross-lingual (wordtranslation) evaluation tasks to evaluate the quality of our multi-lingual word embeddings. As a follow-up to the word retrieval task we also run experiments on cross-lingual sentence retrieval tasks. We further demonstrate the quality of our multi-lingual word embeddings on monolingual word- and sentence-level similarity tasks from (Faruqui and Dyer, 2014b), which we believe provides empirical evidence that the aligned embeddings preserve and even potentially enhance their monolingual quality. 4.1 Word Translation This task is concerned with the problem of retrieving the translation of a given set of source words. 
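In the aligned setting, this retrieval reduces to nearest-neighbour search in the shared latent space. The sketch below is a minimal illustration, assuming the source queries and the full target vocabulary have already been mapped to their posterior means E[z|x] and E[z|y]; plain cosine similarity is used here and the CSLS post-processing discussed later is omitted.

```python
import numpy as np

def retrieve_translations(Zx, Zy, target_words, topk=5):
    """Zx: latent projections of the source queries; Zy: latent projections of
    the full target vocabulary (e.g. posterior means E[z|x], E[z|y])."""
    Zx = Zx / np.linalg.norm(Zx, axis=1, keepdims=True)
    Zy = Zy / np.linalg.norm(Zy, axis=1, keepdims=True)
    idx = np.argsort(-(Zx @ Zy.T), axis=1)[:, :topk]
    return [[target_words[j] for j in row] for row in idx]
```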
We reproduce results in the same environment as (Lample et al., 2018)1 for a fair comparison. We perform an ablation study to assess the effectiveness of our method in the Italian to English (it-en) setting in (Smith et al., 2017; Dinu et al., 2014). 1github.com/Babylonpartners/MultilingualFactorAnalysis, based on github.com/facebookresearch/MUSE. 1742 Method en-es es-en en-fr fr-en en-de de-en en-ru ru-en en-zh zh-en Supervised SVD 77.4 77.3 74.9 76.1 68.4 67.7 47.0 58.2 27.3* 09.3* IBFA 79.5 81.5 77.3 79.5 70.7 72.1 46.7 61.3 42.9 36.9 SVD+CSLS 81.4 82.9 81.1 82.4 73.5 72.4 51.7 63.7 32.5* 25.1* IBFA+CSLS 81.7 84.1 81.9 83.4 74.1 75.7 50.5 66.3 48.4 41.7 Semi-supervised SVD 65.9 74.1 71.0 72.7 60.3 65.3 11.4 37.7 06.8 00.8 IBFA 76.1 80.1 77.1 78.9 66.8 71.8 23.1 39.9 17.1 24.0 AdvR 79.1 78.1 78.1 78.2 71.3 69.6 37.3 54.3 30.9 21.9 SVD+CSLS 73.0 80.7 75.7 79.6 65.3 70.8 20.9 41.5 10.5 01.7 IBFA+CSLS 76.5 83.7 78.6 82.3 68.7 73.7 25.3 46.3 22.1 27.2 AdvR+CSLS 81.7 83.3 82.3 82.1 74.0 72.2 44.0 59.1 32.5 31.4 Table 1: Precision @1 for cross-lingual word similarity tasks. Rows labelled AdvR are copies of Adversarial Refine rows in (Lample et al., 2018). Results marked with a * differ from the ones shown in (Lample et al., 2018) due to pre-processing done on their part. SVD and IBFA in the semi-supervised setting use the pseudo-dictionary, while AdvR uses frequency information. CSLS is the post-processing technique proposed in (Lample et al., 2018). In these experiments we are interested in studying the effectiveness of our method compared to that of the Procrustes-based fitting used in (Smith et al., 2017) without any post-processing steps to address the hubness problem (Dinu et al., 2014). In Table 1 we observe how our model is competitive to the results in (Lample et al., 2018) and outperforms them in most cases. We notice that given an expert dictionary, our method performs the best out of all compared methods on all tasks, except in English to Russian (en-ru) translation where it remains competitive. What is surprising is that, in the semi-supervised setting, IBFA bridges the gap between the method proposed in (Lample et al., 2018) on languages where the dictionary of identical tokens across languages (i.e. the pseudo-dictionary from (Smith et al., 2017)) is richer. However, even though it significantly outperforms SVD using the pseudo-dictionary, it cannot match the performance of the adversarial approach for more distant languages like English and Chinese (en-zh). 4.1.1 Detailed Comparison to Basic SVD We present a more detailed comparison to the SVD method described in (Smith et al., 2017). We focus on methods in their base form, that is without post-processing techniques, i.e. crossdomain similarity local scaling (CSLS) (Lample et al., 2018) or inverted softmax (ISF) (Smith et al., 2017). Note that (Smith et al., 2017) used the scikit-learn 2 implementation of CCA, which uses an iterative estimation of partial least squares. This does not give the same results as the standard CCA procedure. In Table 2 we reproduce the results from (Smith et al., 2017) using the dictionaries and embeddings provided by (Dinu et al., 2014)3 and we compare our method (IBFA) using both the expert dictionaries from (Dinu et al., 2014) and the pseudo-dictionaries as constructed in (Smith et al., 2017). We significantly outperform both SVD and CCA, especially when using the pseudo-dictionaries. 4.2 Word Similarity Tasks This task assesses the monolingual quality of word embeddings. 
In this experiment, we fit both considered methods (CCA and IBFA) on the entire available dictionary of around 100k word pairs. We compare to CCA as used in (Faruqui and Dyer, 2014b) and standard monolingual word embeddings on the available tasks from (Faruqui and Dyer, 2014b). We evaluate our multilingual embeddings on the following tasks: WS353 (Finkelstein et al., 2002); WS-SIM, WS-REL (Agirre et al., 2009); RG65 (Rubenstein and Goodenough, 1965); MC-30 (Miller and Charles, 1991); MT2A commonly used Python library for scientific computing, found at (Pedregosa et al., 2011). 3http://clic.cimec.unitn.it/ georgiana.dinu/down/ 1743 English to Italian Italian to English English to Italian Italian to English @1 @5 @10 @1 @5 @10 @1 @5 @10 @1 @5 @10 Mikolov et. al. 33.8 48.3 53.9 24.9 41.0 47.4 1.0 2.8 3.9 2.5 6.4 9.1 CCA (Sklearn) 36.1 52.7 58.1 31.0 49.9 57.0 29.1 46.4 53.0 27.0 47.0 52.3 CCA 30.9 48.1 52.7 27.7 45.5 51.0 26.5 42.5 48.1 22.8 40.1 45.5 SVD 36.9 52.7 57.9 32.2 49.6 55.7 27.1 43.4 49.3 26.2 42.1 49.0 IBFA (Ours) 39.3 55.3 60.1 34.7 53.5 59.4 34.7 52.6 58.3 33.7 53.3 59.2 Table 2: Comparisons without post-processing of methods. Results reproduced from (Smith et al., 2017) for fair comparison. Left: Comparisons using the same expert dictionary as (Smith et al., 2017). Right: Comparisons using the pseudo-dictionary from (Smith et al., 2017). Embeddings WS WS-SIM WS-REL RG-65 MC-30 MT-287 MT-771 MEN-TR English 73.7 78.1 68.2 79.7 81.2 67.9 66.9 76.4 IBFA en-de 74.4 79.4 68.3 81.4 84.2 67.2 69.4 77.8 IBFA en-fr 72.4 77.8 65.8 80.5 83.0 68.2 69.6 77.6 IBFA en-es 73.6 78.5 67.0 79.0 83.0 68.2 69.4 77.3 CCA en-de 71.7 76.4 64.0 76.7 82.4 63.0 64.7 75.3 CCA en-fr 70.9 76.4 63.3 76.5 81.4 63.4 65.4 74.9 CCA en-es 70.8 76.3 63.1 76.4 81.2 63.0 65.1 74.7 Table 3: Spearman correlation for English word similarity tasks. First row represents monolingual fasttext vectors (Joulin et al., 2017) in English, the rest are bilingual embeddings. 287; (Radinsky et al., 2011); MT-771 (Halawi et al., 2012), and MEN-TR (Bruni et al., 2012). These tasks consist of English word pairs that have been assigned ground truth similarity scores by humans. We use the test-suite provided by (Faruqui and Dyer, 2014a)4 to evaluate our multilingual embeddings on these datasets. This testsuite calculates similarity of words through cosine similarity in their representation spaces and then reports Spearman correlation with the ground truth similarity scores provided by humans. As shown in Table 3, we observe a performance gain over CCA and monolingual word embeddings suggesting that we not only preserve the monolingual quality of the embeddings but also enhance it. 4.3 Monolingual Sentence Similarity Tasks Semantic Textual Similarity (STS) is a standard benchmark used to assess sentence similarity metrics (Agirre et al., 2012, 2013, 2014, 2015, 2016). In this work, we use it to show that our alignment procedure does not degrade the quality of the embeddings at the sentence level. For both IBFA and CCA, we align English and one other language 4https://github.com/mfaruqui/eval-word-vectors (from French, Spanish, German) using the entire dictionaries (of about 100k word pairs each) provided by (Lample et al., 2018). We then use the procedure defined in (Arora et al., 2016) to create sentence embeddings and use cosine similarity to output sentence similarity using those embeddings. The method’s performance on each set of embeddings is assessed using Spearman correlation to human-produced expert similarity scores. 
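The evaluation protocol in these word-similarity tasks is cosine similarity between embeddings compared against human ratings via Spearman correlation; a minimal sketch of that loop follows, assuming a hypothetical `embeddings` lookup and a list of human-scored word pairs.

```python
# A small sketch of the word-similarity evaluation used in Section 4.2:
# cosine similarity between embeddings, compared against human ratings with
# Spearman correlation. `embeddings` maps word -> np.ndarray; `pairs` is a
# list of (word1, word2, human_score) tuples, e.g. from WS-353 (hypothetical
# data-loading is omitted).
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate_similarity(embeddings, pairs):
    model_scores, human_scores = [], []
    for w1, w2, gold in pairs:
        if w1 in embeddings and w2 in embeddings:   # skip out-of-vocabulary pairs
            model_scores.append(cosine(embeddings[w1], embeddings[w2]))
            human_scores.append(gold)
    rho, _ = spearmanr(model_scores, human_scores)
    return rho
```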
As evidenced by the results shown in Table 4, IBFA remains competitive using any of the three languages considered, while CCA shows a performance decrease. 4.4 Crosslingual Sentence Similarity Tasks Europarl (Koehn, 2005) is a parallel corpus of sentences taken from the proceedings of the European parliament. In this set of experiments, we focus on its English-Italian (en-it) sub-corpus, in order to compare to previous methods. We report results under the framework of (Lample et al., 2018). That is, we form sentence embeddings using the average of the tf-idf weighted word embeddings in the bag-of-words representation of the sentence. Performance is averaged over 2,000 randomly chosen source sentence queries and 200k 1744 Embeddings STS12 STS13* STS14 STS15 STS16 English 58.1 69.2 66.7 72.6 70.6 IBFA en-de 58.1 70.2 66.8 73.0 71.6 IBFA en-fr 58.0 70.0 66.7 72.8 71.4 IBFA en-es 57.9 69.7 66.6 72.9 71.7 CCA en-de 56.7 67.5 65.7 73.1 70.5 CCA en-fr 56.7 67.9 65.9 72.8 70.8 CCA en-es 56.6 67.8 65.9 72.9 70.8 Table 4: Spearman correlation for Semantic Textual Similarity (STS) tasks in English. All results use the sentence embeddings described in (Arora et al., 2016). First row represents monolingual FastText vectors (Joulin et al., 2017) in English, the rest are bilingual embeddings. *STS13 excludes the proprietary SMT dataset. English to Italian Italian to English @1 @5 @10 @1 @5 @10 Mikolov et. al.✓ 10.5 18.7 22.8 12.0 22.1 26.7 Dinu et al.✓ 45.3 72.4 80.7 48.9 71.3 78.3 Smith et al.✓ 54.6 72.7 78.2 42.9 62.2 69.2 SVD 40.5 52.6 56.9 51.2 63.7 67.9 IBFA (Ours) 62.7 74.2 77.9 64.1 75.2 79.5 SVD + CSLS 64.0 75.8 78.5 67.9 79.4 82.8 AdvR + CSLS 66.2 80.4 83.4 58.7 76.5 80.9 IBFA + CSLS 68.8 80.7 83.5 70.2 80.8 84.8 Table 5: Sentence translation precisions @1, @5, @10 on 2,000 English-Italian pairs samples from a set of 200k sentences from Europarl (Koehn, 2005) on Dinu embeddings. AdvR is copied from Adversarial - Refined in (Lample et al., 2018). Rows with ✓copied from (Smith et al., 2017). target sentences for each language pair. Note that this is a different set up to the one presented in (Smith et al., 2017), in which an unweighted average is used. The results are reported in Table 5. As we can see, IBFA outperforms all prior methods both using nearest neighbour retrieval, where it has a gain of 20 percent absolute on SVD, as well as using the CSLS retrieval metric. 4.5 Alignment of three languages In an ideal scenario, when we have v languages, we wouldn’t want to train a transformation between each pair, as that would involve storing O(v2) matrices. One way to overcome this problem is by aligning all embeddings to a common space. In this exploratory experiment, we constrain ourselves to aligning three languages at the same time, but the same methodology could be applied to an arbitrary number of languages. MBFA, the extension of IBFA described in Section 2.2.1 naturally lends itself to this task. What is needed for training this method is a dictionary of word triples across the three languages considered. We construct such a dictionary by taking the intersection of all 6 pairs of bilingual dictionaries for the three languages provided by (Lample et al., 2018). We then train MBFA for 20,000 iterations of EM (a brief analysis of convergence is provided in Appendix G). Alternatively, with direct methods like (Smith et al., 2017; Lample et al., 2018) one could align all languages to English and treat that as the common space. We compare both approaches and present their results in Table 6. 
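The sentence representation used in the cross-lingual retrieval experiments above is the average of tf-idf-weighted word embeddings over a sentence's bag of words; a small sketch under that reading follows, with hypothetical `embeddings` and `idf` lookups and naive whitespace tokenisation.

```python
# A sketch of the sentence representation described in Section 4.4: the
# tf-idf-weighted average of the word embeddings in the bag-of-words
# representation of a sentence. The `embeddings` and `idf` lookups are
# hypothetical placeholders.
from collections import Counter
import numpy as np

def sentence_embedding(sentence, embeddings, idf, dim=300):
    counts = Counter(sentence.lower().split())
    total = sum(counts.values())
    vec, weight_sum = np.zeros(dim), 0.0
    for word, count in counts.items():
        if word in embeddings:
            w = (count / total) * idf.get(word, 1.0)   # tf * idf weight
            vec += w * embeddings[word]
            weight_sum += w
    return vec / weight_sum if weight_sum > 0 else vec

# Retrieval then ranks target-language sentence vectors by cosine (or CSLS)
# similarity to the query sentence vector, as in Table 5.
```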
As we can see, both methods experience a decrease in overall performance when compared to models fitted on just a pair of languages, however MBFA performs better overall. That is, the direct approaches preserve their performance on translation to and from English, but translation from French to Italian decreases significantly. Meanwhile, MBFA suffers a decrease in each pair of languages, however it retains competitive performance to the direct methods on English translation. It is worth noting that as the number of aligned languages v increases, there are O(v) pairs 1745 Method en-it it-en en-fr fr-en it-fr fr-it SVD 71.0 72.4 74.9 76.1 78.3 72.9 MBFA 71.9 73.4 76.7 78.1 82.6 77.5 SVD+CSLS 76.2 77.9 81.1 82.4 84.5 79.8 MBFA+CSLS 77.4 77.7 81.9 82.1 86.8 81.9 Table 6: Precision @1 when aligning English, French and Italian embeddings to a common space. For SVD, this common space is English, while for MBFA it is the latent space. of languages, one of which is English, and O(v2) pairs in which English does not participate. This suggests that MBFA may generalise past three simultaneously aligned languages better than the direct methods. 4.6 Generating Random Word Pairs We explore the generative process of IBFA by synthesising word pairs from noise, using a trained English-Spanish IBFA model. We follow the generative process specified in Equation 1 to generate 2,000 word vector pairs and then we find the nearest neighbour vector in each vocabulary and display the corresponding words. We then rank these 2,000 pairs according to their joint probability under the model and present the top 28 samples in Table 7. Note that whilst the sampled pairs are not exact translations, they have closely related meanings. The examples we found interesting are dreadful and despair; frightening and brutality; crazed and merry; unrealistic and questioning; misguided and conceal; reactionary and conservatism. 5 Conclusion We have introduced a cross-lingual embedding alignment procedure based on a probabilistic latent variable model, that increases performance across various tasks compared to previous methods using both nearest neighbour retrieval, as well as the CSLS criterion. We have shown that the resulting embeddings in this aligned space preserve their quality by presenting results on tasks that assess word and sentence-level monolingual similarity correlation with human scores. The resulting embeddings also significantly increase the precision of sentence retrieval in multilingual settings. Finally, the preliminary results we have shown on aligning more than two languages at the same time provide an exciting path for future research. 
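For concreteness, a sketch of the sampling procedure of Section 4.6 is given below, assuming the standard IBFA generative story (a latent vector drawn from a standard normal, each view generated as a noisy linear map of it). Parameter names and the nearest-neighbour lookup are illustrative, and ranking by joint probability under the model is omitted.

```python
# A sketch of generating word pairs from a fitted IBFA model (Section 4.6),
# assuming the usual generative process z ~ N(0, I), x = W_x z + mu_x + noise,
# y = W_y z + mu_y + noise. All arguments are hypothetical fitted parameters.
import numpy as np

def sample_pairs(Wx, Wy, mu_x, mu_y, Psi_x, Psi_y, n=2000, rng=None):
    rng = rng or np.random.default_rng(0)
    k = Wx.shape[1]
    Z = rng.standard_normal((n, k))
    X = Z @ Wx.T + mu_x + rng.multivariate_normal(np.zeros(len(mu_x)), Psi_x, size=n)
    Y = Z @ Wy.T + mu_y + rng.multivariate_normal(np.zeros(len(mu_y)), Psi_y, size=n)
    return X, Y

def nearest_words(samples, vocab_matrix, vocab_words):
    """Map each sampled vector to the closest vocabulary word (cosine similarity)."""
    s = samples / np.linalg.norm(samples, axis=1, keepdims=True)
    v = vocab_matrix / np.linalg.norm(vocab_matrix, axis=1, keepdims=True)
    return [vocab_words[i] for i in (s @ v.T).argmax(axis=1)]
```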
en es es→en particular efectivamente effectively correspondingly esto this silly ir´onicamente ironic frightening brutalidad brutality manipulations intencionadamente intentionally ignore contraproducente counterproductive fundamentally entendido understood embarrassed enojado angry terrified casualidad coincidence hypocritical obviamente obviously wondered inc´omodo uncomfortable oftentimes apostar betting unwittingly traicionar betray mishap ir´onicamente ironically veritable empero however overpowered deshacerse fall apart crazed divertidos merry frightening iron´ıa irony dreadful desesperaci´on despair instituting restablecimiento recover unrealistic cuestionamiento questioning regrettable err´oneos mistaken irresponsible preocupaciones concerns obsession irremediablemente hopelessly embodied voluntad will misguided esconder conceal perspective contestaci´on answer reactionary conservadurismo conservatism Table 7: Random pairs sampled from model, selected top 28 ranked by confidence. Proper nouns, and acronyms (names and surnames) were removed from the list. Third column represents a correct translation from Spanish to English. 1746 References Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pas¸ca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and wordnet-based approaches. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 19–27. Association for Computational Linguistics. Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, et al. 2015. Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability. In Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015), pages 252–263. Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. Semeval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th international workshop on semantic evaluation (SemEval 2014), pages 81–91. Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 497–511. Eneko Agirre, Daniel Cer, Mona Diab, Aitor GonzalezAgirre, and Weiwei Guo. 2013. * sem 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Semantics (* SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, volume 1, pages 32–43. Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 385–393. Association for Computational Linguistics. Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925. Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2016. 
A simple but tough-to-beat baseline for sentence embeddings. International Conference on Learning Representations, 2017. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2289–2294. Ben Athiwaratkun and Andrew Gordon Wilson. 2017. Multimodal word distributions. Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, pages 427–431,Valencia, Spain, April 3-7. Francis R Bach and Michael I Jordan. 2005. A probabilistic interpretation of canonical correlation analysis. Computer Science Division, University of California Berkeley. David Barber. 2012. Bayesian reasoning and machine learning, pages 474–475. Cambridge University Press. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Michael W Browne. 1979. The maximum-likelihood solution in inter-battery factor analysis. British Journal of Mathematical and Statistical Psychology, 32(1):75–86. Michael W Browne. 1980. Factor analysis of multiple batteries by maximum likelihood. British Journal of Mathematical and Statistical Psychology, 33(2):184–199. Elia Bruni, Gemma Boleda, Marco Baroni, and NamKhanh Tran. 2012. Distributional semantics in technicolor. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 136–145. Association for Computational Linguistics. Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2014. Improving zero-shot learning by mitigating the hubness problem. arXiv preprint arXiv:1412.6568. Manaal Faruqui and Chris Dyer. 2014a. Community evaluation and exchange of word vectors at wordvectors.org. In Proceedings of ACL: System Demonstrations. Manaal Faruqui and Chris Dyer. 2014b. Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 462–471. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on information systems, 20(1):116–131. 1747 Zoubin Ghahramani, Geoffrey E Hinton, et al. 1996. The em algorithm for mixtures of factor analyzers. Technical report, Technical Report CRG-TR-96-1, University of Toronto. Guy Halawi, Gideon Dror, Evgeniy Gabrilovich, and Yehuda Koren. 2012. Large-scale learning of word relatedness with constraints. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1406– 1414. ACM. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, pages 427–431,Valencia, Spain, April 3-7, 2017, Volume 2. Thomas R Knapp. 1978. Canonical correlation analysis: A general parametric significance-testing system. Psychological Bulletin, 85(2):410. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, volume 5, pages 79–86. Guillaume Lample, Alexis Conneau, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 
2018. Word translation without parallel data. In Proceedings of the 6th International Conference on Learning Representations. Ang Lu, Weiran Wang, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Deep multilingual correlation for improved word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 250–256. Roderick P McDonald. 1970. Three common factor models for groups of variables. Psychometrika, 35(1):111–128. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. George A Miller and Walter G Charles. 1991. Contextual correlates of semantic similarity. Language and cognitive processes, 6(1):1–28. Hannah Neuser. 2017. Source Language of Lexical Transfer in Multilingual Learners: A Mixed Methods Approach. Ph.D. thesis, Department of English, Stockholm University. Aneta Pavlenko. 2009. Conceptual representation in the bilingual lexicon and second language vocabulary learning. The bilingual mental lexicon: Interdisciplinary approaches, pages 125–160. Karl Pearson. 1901. Liii. on lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11):559–572. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Kaare Brandt Petersen, Michael Syskind Pedersen, et al. 2008. The matrix cookbook. Technical University of Denmark, 7(15):510. Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: computing word relatedness using temporal semantic analysis. In Proceedings of the 20th international conference on World wide web, pages 337–346. ACM. Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633. Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. Proceedings of the 5th International Conference on Learning Representations. Charles Spearman. 1904. ” general intelligence,” objectively determined and measured. The American Journal of Psychology, 15(2):201–292. Louis L Thurstone. 1931. Multiple factor analysis. Psychological Review, 38(5):406. Michael E Tipping and Christopher M Bishop. 1999. Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3):611–622. Ledyard R Tucker. 1958. An inter-battery method of factor analysis. Psychometrika, 23(2):111–136. Uriel Weinreich. 1953. Languages in contact. findings andproblems. New York: Linguistic Circle of New York and The Hague: Mouton. 1748 Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 
2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006–1011. A Joint Distribution We show the form of the joint distribution for 2 views. Concatenating our data and parameters as below, we can use Equation (3) of (Ghahramani et al., 1996) to write m = " x y # , W = " Wx Wy # Ψ = " Ψx 0 0 Ψy # , µ = " µx µy # p(m, z|θ) = N " m z # ; " µ 0 # , Σm,z ! (5) Σm,z = " W W ⊤+ Ψ W W ⊤ I # It is clear that this generalises to any number of views of any dimension, as the concatenation operation does not make any assumptions. B Projections to Latent Space Ep(z|x)[z] We can query the joint Gaussian in 5 using rules from (Petersen et al., 2008) Sections (8.1.2, 8.1.3) and we get p(z|x) = N  z; W ⊤ x Σ−1 x ˜x, I −W ⊤ x Σ−1 x Wx  E[z|x] = W ⊤ x Σ−1 x ˜x C Derivation for the Marginal Likelihood We want to compute p(x, y|θ) so that we can then learn the parameters θ = {θx, θy}, θi = {µi, Wi, Ψi, } by maximising the marginal likelihood as is done in Factor Analysis. From the joint p(m, z|θ), again using rules from (Petersen et al., 2008) Sections (8.1.2) we get p(m|θ) = p(x, y|θ) = N " x y # ; " µx µy # , W W T + Ψ ! For the case of two views, the joint probability can be factored as p(x, y|θ) = p(x|θx)p(y|x, θ) p(x|θx) = N (x; µx, Σx) p(y|x, θ) = N  y; WyW ⊤ x Σ−1 x ˜x + µy, Σy|x  = N y; WyE[z|x] + µy, Σy|x  , where Σx = WxW ⊤ x + Ψx Σy|x = Σy −WyW ⊤ x Σ−1 x WxW ⊤ y D Scaled Reconstruction Errors log p(x, y|θ) = log p∗(x|θx) + log p∗(y|x, θ) −1 2(log |2πΣy|x| + log |2πΣx|) log p∗(y|x, θ) = −1 2||˜y −WyE[z|x]||2 Σy|x log p∗(x|θx) = −1 2||x −µx||2 Σx = −1 2||Σ −1 2 x ˜x||2 Setting A = ΨxΣ−1 x Ψx, we can re-parametrise as log p∗(x|θx) = −1 2||ΨxΣ−1 x ˜x||2 A = −1 2||(Σx −WxW ⊤ x )Σ−1 x ˜x||2 A = −1 2||˜x −WxW ⊤ x Σ−1 x ˜x||2 A = −1 2||˜x −WxE[z|x]||2 A E Expectation Maximisation for MBFA Define ˜x =   x1 −µ1 ... xv −µ1  , W =   W1 ... Wv   Ψ =   Ψ1 0 ... 0 Ψv  = Bdiag(Ψ1, . . . , Ψv) Hence p(˜x|z; Ψ, W ) = N(˜x|W z, Ψ) 1749 Method EN-IT IT-EN EN-FR FR-EN IT-FR FR-IT MBFA-1K 71.9 73.3 76.7 78.2 82.4 77.5 MBFA-20K 71.9 73.4 76.7 78.1 82.6 77.5 MBFA-1K+CSLS 77.5 77.6 81.9 82.0 86.8 82.1 MBFA-20K+CSLS 77.4 77.7 81.9 82.1 86.8 81.9 Table 8: Precision @1 between MBFA fitted for 1K iterations and MBFA fitted for 20K iterations. This follows the same form as regular factor analysis, but with a block-diagonal constraint on Ψ. Thus by Equations (5) and (6) of (Ghahramani et al., 1996), we apply EM as follows. E-Step: Compute E[z|x] and E[zz⊤|x] given the parameters θt = {Wt, Ψt}. E[z(i)|˜x(i)] = Bt ˜x(i) E[z(i)z(i)⊤|˜x(i)] = I −BtWt + Bt ˜x(i) ˜x(i)⊤B⊤ t = Mt + Bt ˜x(i) ˜x(i)⊤B⊤ t (6) where Mt =  I + W ⊤ t Ψ−1 t Wt −1 Bt = W ⊤ t (Ψt + WtW ⊤ t )−1 = MtW ⊤ t Ψ−1 t . (7) Equation 6 is obtained by applying the Woodbury identity, and Equation 7 by applying the closely related push-through identity, as found in Section 3.2 of (Petersen et al., 2008). M-Step: Update parameters θt+1 ={Wt+1, Ψt+1}. Define S = 1 m m X i=1 ˜x(i) ˜x(i)⊤ By first observing 1 m m X i=1 ˜x(i)E[z(i)|˜x(i)]⊤= SB⊤ t 1 m m X j=1 E[z(j)z(j)⊤|˜x(j)] = Mt + BtSB⊤ t , update the parameters as follows. Wt+1= SB⊤ t  I −BtWt + BtSB⊤ t −1 = SB⊤ t  Mt + BtSB⊤ t −1 eΨt+1 = 1 m m X i=1 ˜x(i) ˜x(i)⊤−Wt+1E[z(i)|˜x(i)]˜x(i)⊤ = S −1 m m X i=1 Wt+1Bt ˜x(i) ˜x(i)⊤ = S −Wt+1BtS = S −SB⊤ t W ⊤ t+1 Imposing the block diagonal constraint, Ψt+1 = Bdiag  ( eΨt+1)11, . . . 
, (\tilde{\Psi}_{t+1})_{vv}), where (\tilde{\Psi})_{ii} = \Psi_i.

F Independence to Noise in Direct Methods

We are maximising the following quantity with respect to $\theta = \{W, \mu, \Psi\}$:
$$p(Y \mid X, \theta) = \prod_i p\big(y^{(i)} \mid x^{(i)}, \theta\big) = \prod_i \mathcal{N}\big(y^{(i)};\, W x^{(i)} + \mu,\, \Psi\big),$$
$$\log p(Y \mid X, \theta) = -\frac{1}{2}\Big(\sum_i \big\lVert y^{(i)} - W x^{(i)} \big\rVert^2_{\Psi} - C\Big).$$
Then the partial derivative $Q = \frac{\partial \log p(Y \mid X, \theta)}{\partial W}$ is proportional to
$$Q \propto \sum_i \Psi^{-1}\big(y^{(i)} - W x^{(i)}\big) x^{(i)\top} \propto \Psi^{-1}\Big(\sum_i y^{(i)} x^{(i)\top} - W \sum_i x^{(i)} x^{(i)\top}\Big).$$
The maximum likelihood is achieved when $\frac{\partial \log p(Y \mid X, \theta)}{\partial W} = 0$, and since $\Psi^{-1}$ is invertible (its inverse is $\Psi$), this means that
$$W \sum_i x^{(i)} x^{(i)\top} = \sum_i y^{(i)} x^{(i)\top}.$$
It is clear from here that the MLE of $W$ does not depend on $\Psi$; we therefore conclude that adding a noise parameter to this directed linear model has no effect on its predictions.

G Learning curve of EM

Figure 4: Training curve of the EM algorithm over the first 5,000 iterations. The procedure quickly finds a good approximation to the optimal parameters and then slowly converges to the true optimum. The left panel shows the entire training curve; the right panel starts from iteration 100.

Figure 4 shows the negative log-likelihood of the three-language model over the first 5,000 iterations. The precision of the learned model is nearly identical when evaluated at iteration 1,000 and at iteration 20,000, as seen in Table 8. This suggests that the model need not be trained to full convergence to work well.
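For reference, a condensed sketch of one EM update for MBFA, following the E- and M-step formulas of Appendix E, is given below. The stacked data matrix and the list of per-view dimensions (used to impose the block-diagonal constraint on Psi) are placeholders.

```python
# A condensed sketch of one EM iteration for MBFA (Appendix E). X_centered is
# the (m, D) matrix of mean-centred, concatenated views; W is (D, k); Psi is a
# (D, D) block-diagonal noise covariance; dims lists the per-view dimensions,
# e.g. [300, 300, 300] (all hypothetical inputs).
import numpy as np

def em_step(X_centered, W, Psi, dims):
    m, D = X_centered.shape
    k = W.shape[1]
    S = X_centered.T @ X_centered / m                    # empirical second moment
    Psi_inv = np.linalg.inv(Psi)
    M = np.linalg.inv(np.eye(k) + W.T @ Psi_inv @ W)     # posterior covariance of z
    B = M @ W.T @ Psi_inv                                # E[z | x] = B x
    W_new = S @ B.T @ np.linalg.inv(M + B @ S @ B.T)     # M-step update for W
    Psi_full = S - W_new @ B @ S                         # unconstrained noise update
    # Impose the block-diagonal constraint: keep only within-view blocks.
    Psi_new = np.zeros_like(Psi_full)
    start = 0
    for d in dims:
        sl = slice(start, start + d)
        Psi_new[sl, sl] = Psi_full[sl, sl]
        start += d
    return W_new, Psi_new
```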
2019
170
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1751–1764 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1751 Meaning to Form: Measuring Systematicity as Information Tiago Pimentel♣Arya D. McCarthy♥Dami´an E. Blasi♠Brian Roark♦Ryan Cotterell♥† ♣Kunumi, ♥Johns Hopkins University, ♠University of Z¨urich & MPI SHH, ♦Google, †University of Cambridge [email protected], [email protected], [email protected], [email protected], [email protected] Abstract A longstanding debate in semiotics centers on the relationship between linguistic signs and their corresponding semantics: is there an arbitrary relationship between a word form and its meaning, or does some systematic phenomenon pervade? For instance, does the character bigram gl have any systematic relationship to the meaning of words like glisten, gleam and glow? In this work, we offer a holistic quantification of the systematicity of the sign using mutual information and recurrent neural networks. We employ these in a data-driven and massively multilingual approach to the question, examining 106 languages. We find a statistically significant reduction in entropy when modeling a word form conditioned on its semantic representation. Encouragingly, we also recover wellattested English examples of systematic affixes. We conclude with the meta-point: Our approximate effect size (measured in bits) is quite small—despite some amount of systematicity between form and meaning, an arbitrary relationship and its resulting benefits dominate human language. 1 Introduction Saussure (1916) expounded on the arbitrariness of the sign. Seen as a critical facet of human language (Hockett, 1960), the idea posits that a sign in human language (a word, in our inquiry) is structured at two levels: the signified, which captures its meaning, and the signifier, which has no meaning but manifests the form of the sign. Saussure himself, however, also documented instances of sound symbolism in language (Saussure, 1912). In this paper, we present computational evidence of relevance to both aspects of Saussure’s work. While dominant among linguists, arbitrariness has been subject to both long theoretical debate (Wilkins, 1668; Eco, 1995; Johnson, 2004; Pullum H ( W | V) I (W;V) Wordform (W) Meaning (V) H (W) Wordform (W) Language Model Language Model Figure 1: We use two independent language models to estimate the mutual information between word forms and meaning—i.e. systematicity, as per our definition. The language models provide upper bounds on H(W) and H(W | V), which can be used to estimate I(W;V). This estimate is as good as the upper bounds are tight— see discussion in §3.4. and Scholz, 2007) and numerous empirical and experimental studies (Hutchins, 1998; Bergen, 2004; Monaghan et al., 2011; Abramova and Fern´andez, 2016; Blasi et al., 2016; Gutierrez et al., 2016; Dautriche et al., 2017). Taken as a whole, these studies suggest non-trivial interactions in the form– meaning interface between the signified and the signifier (Dingemanse et al., 2015). Although the new wave of studies on form– meaning associations range across multiple languages, methods and working hypotheses, they all converge on two important dimensions: 1. The description of meaning is parameterized with pre-defined labels—e.g., by using existing ontologies like List et al. (2016). 2. 
The description of forms is restricted to the presence, absence or sheer number of occurrence of particular units (such as phones, syllables or handshapes). We take an information-theoretic approach to quan1752 tifying the relationship between form and meaning using flexible representations in both domains, rephrasing the question of systematicity: How much does certainty of one reduce uncertainty of the other? This gives an operationalization as the mutual information between form and meaning, when treating both as random variables—the signifier as a word’s phone string representation in the International Phonetic Alphabet (IPA), and the signified as a distributed representation (Mikolov et al., 2013) for that word’s lexical semantics, devoid of morphological or other subword information. We show how to estimate mutual information as the difference in entropy of two phone-level LSTM language models—one of which is conditioned on the semantic representation. This operationalization, depicted in Figure 1, allows us to express the global effect of meaning on form in vocabulary datasets with wide semantic coverage. In addition to this lexicon-level characterization of systematicity, we also show that this paradigm can be leveraged for studying more narrowlydefined form-meaning associations such as phonesthemes—submorphemic, meaning-bearing units— in the style of Gutierrez et al. (2016). These short sound sequences typically suggest some aspect of meaning in the words that contain them, like -ump for rounded things in English. Previous computational studies, whether focusing on characterizing the degree of systematicity (Monaghan et al., 2014b,a, 2011; Shillcock et al., 2001), discovering phonesthemes (Liu et al., 2018), or both (Gutierrez et al., 2016), have invariably framed systematicity in terms of distances and/or similarities–the relation between word-form distance/similarity on the one hand (e.g., based on string edit distance) and semantic distance/similarity on the other (e.g., as defined within a semantic vector space). Our methods have the virtue of not relying on some predefined notion of similarity or distance in either domain for our measurement of systematicity. Empirically, we focus on two experimental regimes. First, we focus on a large corpus (CELEX) of phone transcriptions in Dutch, English, and German. In these three languages, we find a significant yet small mutual information even when controlling for grammatical category. Second, we perform a massively multilingual exploration of sound– meaning systematicity (§5.1) on the NorthEuraLex corpus (Dellert and J¨ager, 2017). This corpus contains expanded Swadesh lists in 106 languages using a unified alphabet of phones. It contains 1016 words in each language, which is often not enough to detect systematicity—we trade the coverage of CELEX for the breadth of languages. Nevertheless, using our information-theoretic operationalization, in most of the languages considered (87 of 106), we find a statistically significant reduction in entropy of phone language modeling by conditioning on a word’s meaning (§5.2). Finally, we find a weak positive correlation between our computed mutual information and human judgments of form– meaning relatedness. 2 Systematic form-meaning associations 2.1 Arbitrariness The lack of a forceful association between form and meaning is regarded as a design feature of language (Hockett, 1960). 
This arbitrariness of the sign is thought to provide a flexible and efficient way for encoding new referents (Monaghan et al., 2011). It has been claimed that it enhances learnability because newly acquired concepts can be paired to any word, instead of devising the word that properly places the concept in one’s constellation of concepts (Gasser et al., 2005), and that it facilitates mental processing compared to an icon-based symbol system, in that the word–meaning map can be direct (Lupyan and Thompson-Schill, 2012). Most importantly, decoupling form from meaning allows communication about things that are not directly grounded in percepts (Clark, 1998; Dingemanse et al., 2015). This opens the door to another of Hockett (1960)’s design features of language: duality of patterning (Martinet, 1949), the idea that language exists on the level of meaningless units (the distinctive; typically phonemes) composed to form the level of meaningful units (the significant; typically morphemes). 2.2 Non-arbitrariness and systematicty Contemporary research has established that nonarbitrary form-meaning associations in vocabulary are more common and diverse than previously thought (Dingemanse et al., 2015). Some non-arbitrary associations might be found repeatedly across unrelated languages presumably due to species-wide cognitive biases (Blasi et al., 2016), others are restricted to language-specific word classes that allow for more or less transparent iconic mappings – so-called ideophones, see Dingemanse (2012; 2018) – and yet others might emerge 1753 from properties of discourse and usage rather than meaning per se (Piantadosi et al., 2011). Systematicity is meant to cover all cases of nonarbitrary form-meaning associations of moderate to large presence in a vocabulary within a language (Dingemanse et al., 2015). In morphology-rich languages, systematic patterns are readily apparent: for instance, across a large number of languages recurring TAM markers or transitivity morphemes could be used to detect verbs, whereas case markers or nominalizing morphemes can serve as a cue for nouns. Yet a sizable portion of research on systematicity is geared towards subtle patterns at the word root level, beyond any ostensive rules of grammar. By and large, systematicity is hailed as a trait easing language acquisition. It reduces the radical uncertainty humans find when first encountering a new word by providing clues about category and meaning (Monaghan et al., 2014a). Systematic patterns can display a large scope within a language: for instance, systematic associations distinguishing nouns from verbs have been found in every language where a comparison was performed systematically (e.g. Monaghan et al., 2007). But at its extreme, systematicity would manifest as an ontology encoded phonetically, e.g., all plants begin with the letter ‘g’, and animals with the letter ‘z’ (Wilkins, 1668; Eco, 1995). As Dingemanse et al. (2015) note, a system of similar forms expressing similar meanings “would lead to high confusability of the very items most in need of differentiation”. 2.3 Phonesthemes One particular systematic pattern comes in the form of phonesthemes (Firth, 1964). These are submorphemic and mostly unproductive affixal units, usually flagging a relatively small semantic domain. A classic example in English is gl-, a prefix for words relating to light or vision, e.g. glimmer, glisten, glitter, gleam, glow and glint (Bergen, 2004). 
Phonesthemes have psychological import; they can be shown to accelerate reaction times in language processing (Hutchins, 1998; Bergen, 2004; Magnus, 2000). They have been attested in English (Wallis, 1699; Firth, 1930; Marchand, 1959; Bolinger, 1949, 2014), Swedish (Abelin, 1999), Japanese (Hamano, 1998), Ojibwa (Rhodes, 1981), Hmong (Ratliff, 1992), and myriad Austronesian languages (McCune, 1985; Blust, 1988). In fact, as Bergen (2004) notes, “every systematic study of a particular language has produced results suggesting that that language has phonesthemes”. Liu et al. (2018) survey computational approaches for identifying phonesthemes. 3 Estimating Systematicity with Information Theory 3.1 Notation and formalization Following Shillcock et al. (2001), we define a sign as a tuple (v(i),w(i)) of a word’s distributional semantic representation (a vector) and its phone string representation (a word form). For a natural language with a set of phones Σ (including a special end-of-string token), we take the space of word forms to be Σ∗, with w(i) ∈Σ∗. We treat the semantic space as a high-dimensional real vector space Rd, with v(i) ∈Rd. The particular v(i) and w(i) are instances of random variables V and W. Further, we want to hunt down potential phonesthemes; we define these to be phone sequences which, compared to others of their length, have a larger mutual information with their meaning. We eliminate positional confounds by examining only words’ prefixes w<k and suffixes w>k.1 3.2 A variational upper bound Entropy, the workhorse of information theory, captures the uncertainty of a probability distribution. In our language modeling case, the quantity is H(W) ≡∑ w∈Σ∗ Pr(w)log 1 Pr(w). (1) Entropy is the average number of bits required to represent a string in the distribution, under an optimal coding scheme. When computing it, we are faced with two problems: We do not know the distribution over word-forms Pr(W) and, even if we did, computing Equation 1 requires summing over the infinite set of possible strings Σ∗. We follow Brown et al. (1992) in tackling these problems together. Approximating Pr(W) with any known distribution Q(W), we get a variational upper bound on H(W) from their cross-entropy, i.e. H(W) ≤HQ(W) (2a) = ∑ w∈Σ∗ Pr(w)log 1 Q(w). (2b) 1 In line with, e.g., Cucerzan and Yarowsky (2003), we treat affixes as word-initial or word-final sequences, regardless of their status as attested morphological entities. 1754 Equation 2b still requires knowledge of Pr(W) and involves an infinite sum, though. Nonetheless, we can use a finite set ˜W of samples from Pr(W) to get an empirical estimate of this value. HQ(W) ≈1 N N ∑ i=1 log 1 Q ˜w(i), ˜w(i) ∈˜W ∼Pr(W) (3) with equality if we let N →∞.2 We now use Equation 3 as an estimate for the entropy of a lexicon. Conditional entropy Conditional entropy reflects the average additional number of bits needed to represent an event, given knowledge of another random variable. If V completely determines W, then the quantity is 0. Conversely, if the variables are independent, then H(W) = H(W | V). Analogously to the unconditional case, we can get an upper bound for the conditional entropy by approximating Pr(W | V) with another distribution Q. HQ(W | V) ≈1 N N ∑ i=1 log 1 Q ˜w(i) | ˜v(i) (4) where ( ˜w(i), ˜v(i)) ∼Pr(W,V). 3.3 Systematicity as mutual information Mutual information (I) measures the amount of information (bits) that the knowledge of either form or meaning provides about the other. 
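Concretely, Equations (3) and (4) are averages of per-word negative log-likelihoods under the two models; a minimal numpy sketch of these estimates, and of the difference that is taken next as the mutual-information estimate, follows. It assumes the per-word values have already been computed in bits; reporting per phone rather than per word is one reading of how the later tables normalise.

```python
# A small sketch of the empirical cross-entropy estimates in Equations (3)
# and (4), and of their difference. Inputs are hypothetical per-word negative
# log2-likelihoods on the same held-out word forms under Q(W) and Q(W | V).
import numpy as np

def entropy_estimates(nll_uncond_bits, nll_cond_bits, num_phones=None):
    h_w = np.asarray(nll_uncond_bits, dtype=float)
    h_wv = np.asarray(nll_cond_bits, dtype=float)
    if num_phones is not None:                 # optionally normalise to bits per phone
        n = np.asarray(num_phones, dtype=float)
        h_w, h_wv = h_w / n, h_wv / n
    H_W = h_w.mean()                           # empirical bound on H(W), Equation (3)
    H_W_given_V = h_wv.mean()                  # empirical bound on H(W | V), Equation (4)
    return H_W, H_W_given_V, H_W - H_W_given_V # the difference estimates I(W; V)
```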
It is the difference between the entropy and conditional entropy: I(W;V) ≡H(W)−H(W | V) (5a) ≈HQ(W)−HQ(W | V). (5b) Systematicity will thus be framed as (statistically significant) nonzero mutual information I(V;W). 3.4 Learning Q Our method relies on decomposing mutual information into a difference of entropies, as shown in Equation 5b. We use upper bounds on both the entropy and conditional entropy measures, so our calculated mutual information is an estimate. This estimate is as good as our bounds are tight, being perfect when Pr(W) = Q(W) and Pr(W|V) = Q(W|V). Still, as we subtract two upper bounds, we cannot guarantee that our MI estimate approaches the real MI from above or below because we do not know which of the entropies’ bounds are 2 This is a direct consequence of the law of large numbers. tighter. There is nothing principled that we can say about the result, except that it is consistent. The procedure for learning the distribution Q is, thus, essential to our method. We must first define a family of distributions Ψ from which Q is learned. Then, we learn Q ∈Ψ by minimizing the righthand-size of Equation 2b—which corresponds to maximum likelihood estimation Q = arg inf q∈Ψ 1 N N ∑ i=1 log 1 q ˜w(i). (6) In this work, we employ a state-of-the-art phonelevel LSTM language model as our Ψ to approximate Pr(W) as closely as possible. 3.5 Recurrent neural LM A phone-level language model (LM) provides a probability distribution over Σ∗: Pr(w) = |w|+1 ∏ i=1 Pr(wi | w<i). (7) Recurrent neural networks are great representation extractors, being able to model long dependencies—up to a few hundred tokens (Khandelwal et al., 2018)—and complex distributions Pr(wi | w<i) (Mikolov et al., 2010; Sundermeyer et al., 2012). We choose LSTM language models in particular, the state-of-the-art for character-level language modeling (Merity et al., 2018).3 Our architecture embeds a word—a sequence of tokens wi ∈Σ—using an embedding lookup table, resulting in vectors zi ∈Rd. These are fed into an LSTM, which produces high-dimensional representations of the sequence (hidden states): h j = LSTM(h j−1,zj), j ∈{1,...,n+1}, (8) where h0 is the zero vector. Each hidden state is linearly transformed and fed into a softmax function, producing a distribution over the next phone: Pr(wi | w<i) = softmax(Whi +b). 4 Experimental Design 4.1 Datasets We first analyze the CELEX database (Baayen et al., 1995), which provides many word types for Dutch, English, and German. In measuring systematicity, we control for morphological variation by only considering monomorphemic words, as in 3 Our tokens are phones rather than graphemes. 1755 Dautriche et al. (2017). Our type-level resource contains lemmata, eliminating the noisy effect of morphologically inflected forms. CELEX contains 6040 English, 3864 German, and 3603 Dutch lemmata for which we have embeddings. While CELEX is a large, well annotated corpus, it only spans three lexically related languages. The NorthEuraLex database (Dellert and J¨ager, 2017) is thus appealing. 
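A condensed PyTorch sketch of the phone-level LSTM language model of §3.5 is shown below; it is not the released implementation, and the hyperparameter values are placeholders. When a meaning vector is supplied, a linear layer maps it to the initial hidden state, giving the conditional model; mapping the vector to every layer's initial state is one simple choice, not necessarily the authors'.

```python
# A condensed sketch of the phone-level LSTM language model (Section 3.5).
# With `meaning` provided, a linear layer produces the initial hidden state,
# yielding the conditional model Q(W | V); otherwise the model estimates Q(W).
import torch
import torch.nn as nn

class PhoneLM(nn.Module):
    def __init__(self, n_phones, emb_dim=64, hidden=256, layers=2, meaning_dim=None):
        super().__init__()
        self.embed = nn.Embedding(n_phones, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=layers, batch_first=True)
        self.out = nn.Linear(hidden, n_phones)
        self.layers, self.hidden = layers, hidden
        # Linear map from the meaning vector to the initial hidden state
        # (one state per layer here, which is an assumption of this sketch).
        self.to_h0 = nn.Linear(meaning_dim, layers * hidden) if meaning_dim else None

    def forward(self, phone_ids, meaning=None):
        # phone_ids: (batch, seq_len) integer phone indices
        z = self.embed(phone_ids)
        state = None
        if self.to_h0 is not None and meaning is not None:
            h0 = (self.to_h0(meaning)
                  .view(-1, self.layers, self.hidden)
                  .transpose(0, 1).contiguous())
            state = (h0, torch.zeros_like(h0))
        h, _ = self.lstm(z, state)
        return self.out(h)   # logits over the next phone at each position

# Training minimises next-phone cross-entropy (Equation 6); the held-out
# average loss, converted to bits, gives the entropy estimates of Section 3.
```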
It is a lexicon of 1016 “basic” concepts, written in a unified IPA scheme and aligned across 107 languages that span 21 language families (including isolates).4 While we cannot restrict NorthEuraLex to monomorphemic words (because it was not annotated by linguists and segmentation models are weak for its low-resource languages), it mainly contains word types for basic concepts— e.g., animal names or verbs—so we are comfortable in the modeling assumption that the words are not decomposable into multiple morphemes. Unlike Dautriche et al. (2017), who draw lexicons from Wikipedia, or Otis and Sagi (2008), we directly use a phone string representation, rather than their proxy of using each language’s orthography. This makes our work the first to quantify the interface between phones and meaning in a massively multilingual setting. Blasi et al. (2016) is the only large-scale exploration of phonetic representations that we find. They examine 40 aligned concepts over 4000 languages and identify that sound correspondences exist across the vast majority. Their resource (Wichmann et al., 2018) does not have enough examples to train our language models, and we add to their findings by measuring a relationship between form and meaning, rather than form given meaning. 4.2 Embeddings We use pre-trained WORD2VEC representations as meaning vectors for the basic concepts. For CELEX, specific representations were pretrained for each of the three languages.5 For NorthEuraLex, as its words are concept aligned, we use the same English vectors for all languages. Pragmatically, we choose English because its vectors have the largest coverage of the lexicon. This does not mean that we assume that semantic spaces 4 We omit Mandarin; the absence of tone annotations leaves its phonotactics greatly underspecified. All reported results are for the remaining 106 languages. 5 We use Google’s WORD2VEC representations pre-trained in Google News corpus for English, while WORD2VEC was trained using Wikipedia dumps for German and Dutch with default hyper-parameters. across languages to be strictly comparable. In fact, we would expect that more direct methods of estimating these vectors would be preferable if they were practical. Note that the methods described above are likely underestimating the semantic systematicity in the data, for a couple of reasons. First, WORD2VEC and other related methods have been shown to do a better job at capturing general relatedness rather than semantic similarity per se (Hill et al., 2015). Second, our use of the English vectors across the concept-aligned corpora is a somewhat coarse expedient. To the extent that the English serves as a poor model for the other languages, we should expect smaller MI estimates. In short, we have chosen easy-to-replicate methods based on commonly used models, rather than extensively tuning our approach for these experiments, possibly at the expense of the size of the effect we observe. To reduce spurious fitting to noise in the dataset, we reduce the dimensionality of these vectors from the original 300 to d while capturing maximal variance, using principal components analysis (PCA). These resulting d-dimensional vectors are kept fixed while training the conditional language model. Each d-dimensional vector v is linearly transformed to serve as the initial hidden state of the conditional LSTM language model: h0 =W(v)v+b(v) h j =LSTM(hj−1,zj), j ∈{1,...,n+1}. 
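A small sketch of the dimensionality-reduction step just described: the 300-dimensional WORD2VEC vectors are compressed with PCA to d components and then held fixed while the conditional model is trained. The input array is a placeholder, and d is treated as a hyperparameter tuned later (§4.5).

```python
# A sketch of compressing the pre-trained meaning vectors with PCA before
# feeding them to the conditional language model. `vectors` is a hypothetical
# (n_words, 300) array of WORD2VEC representations.
import numpy as np
from sklearn.decomposition import PCA

def compress_meanings(vectors, d):
    reduced = PCA(n_components=d).fit_transform(vectors)   # (n_words, d)
    return reduced.astype(np.float32)   # kept fixed; passed as `meaning` to the LM
```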
We reject morphologically informed embeddings (e.g., Bojanowski et al., 2017) because this would be circular: We cannot question the arbitrariness of the form–meaning interface if the meaning representations are constructed with explicit information from the form. This is the same reason that we do not fine-tune the embeddings—our goal is to enforce as clean a separation as possible of model and form, then suss out what is inextricable. 4.3 Controlling for grammatical category The value of WORD2VEC comes from distilling more than just meaning. It also encodes the grammatical classes of words. Unfortunately, this is a trivial source of systematicity: if a language’s lemmata for some class follow a regular pattern (such as the verbal infinitive endings in Romance languages), our model will have uncovered something meaningless. Prior work—e.g., (Dautriche et al., 2017; Gutierrez et al., 2016)—does not account for 1756 this. To isolate factors like these, we can estimate the mutual information between word form and meaning, while conditioning on a third factor. The expression is similar to Equation 5a: I(W;V | C) ≡H(W | C)−H(W | V,C), (9) where C is our third factor—in this case, grammatical class.6 Both CELEX and NorthEuraLex are annotated with grammatical classes for each word. We create a lookup embedding for each class in a language, then use the resulting representation as an initial hidden state to the LSTM (h0 = c). When conditioning on both meaning and class, we concatenate half-sized representations of the meaning (pre-trained) and class to create the first hidden state (h0 = [c′;W(v)v′ +b(v)]). 4.4 Hypothesis testing We follow Gutierrez et al. (2016) and Liu et al. (2018) in using a permutation test to assess our statistical significance. In it, we randomly swap the sign of I values for each word, showing mutual information is significantly positive. Our null hypothesis, then, is that this value should be 0. Recomputing the average mutual information over many shufflings gives rise to an empirical p-value: asymptotically, it will be twice the fraction of permutations with a higher mutual information than the true lexicon. In our case, we used 100,000 random permutations. 4.5 Hyperparameters and optimization We split both datasets into ten folds, using one fold for validation, another for testing, and the rest for training. We optimize all hyper-parameters with 50 rounds of Bayesian optimization—this includes the number of layers in the LSTM, its hidden size, the PCA size d used to compress the meaning vectors, and a dropout probability. Such an optimization is important to get tighter bounds for the entropies, as discussed in §3.4. We use a Gaussian process prior and maximize the expected improvement on the validation set, as in Snoek et al. (2012). 7 5 Results and Analysis 5.1 Identifying systematicity We find statistically significant nonzero mutual information in all three CELEX languages (Dutch, English, and German), using a permutation test to establish significance. This gives us grounds to reject the null hypothesis. We also find a statistically significant mutual information when conditioning entropies in words’ grammar classes. These results are summarized in Table 1. But how much could the mutual information have been? 
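Before turning to that question, the sign-flip permutation test of §4.4 can be sketched as follows; the per-word mutual-information contributions are a hypothetical input, and the explicit loop trades speed for memory.

```python
# A sketch of the permutation test in Section 4.4: randomly flip the sign of
# each word's mutual-information contribution and compare the permuted means
# to the observed mean; the empirical p-value is (asymptotically) twice the
# fraction of permutations with a higher value.
import numpy as np

def permutation_test(per_word_mi, n_permutations=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(per_word_mi, dtype=float)
    observed = x.mean()
    exceed = 0
    for _ in range(n_permutations):
        signs = rng.choice([-1.0, 1.0], size=x.shape)
        if (signs * x).mean() >= observed:
            exceed += 1
    return 2.0 * exceed / n_permutations
```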
A raw number of bits is not easily interpretable, so we provide another informationtheoretic quantity, the uncertainty coefficient, expressing the fraction of bits we can predict given the meaning: U(W | V) = I(W;V) H(W) .The mutual information I(W;V) is upper-bounded by the language’s entropy H(W), so the uncertainty coefficient is between zero and one.8 For the CELEX data, we give the uncertainty coefficients with and without conditioning on part of speech in Table 1. By comparing results with and without conditioning on grammatical category, we see the importance of controlling for known factors of systematicity. As expected, all systematicity (mutual information) results are smaller when we condition on part of speech. After conditioning, systematicity remains present, though. In English, we can guess about 3.25% of the bits encoding the phone sequence, given the meaning. In Dutch and German, these quantities are higher. The effect size of systematicity in these languages, though, is small. 5.2 Broadly multilingual analysis On the larger set of languages in NorthEuraLex, we see that 87 of the 106 languages have statistically significant systematicity (p < 0.05), after Benjamini–Hochberg (1995) corrections. When we control for grammatical classes (I(W;V | POS)), we still get significant systematicity across languages (p < 10−3). A per-language analysis, though, only finds statistical significance for 17 of them, after Benjamini–Hochberg (1995) corrections. This evinces the importance of conditioning on grammatical category; without doing so, we would find a spurious result due to crafted, mor6 If markers of subclasses within a given part of speech are frequent, these may also emerge. 7 Our implementation is available at https://github. com/tpimentelms/meaning2form. 8 Because of our estimation, it may be less than zero. 1757 Systematicity Systematicity controlling for POS tags Language H(W) I(W;V) U(W | V) Cohen’s d I(W;V | POS) U(W | V;POS) Cohen’s d English 3.401 0.110 3.24% 0.175 0.084 2.50% 0.133 German 3.195 0.168 5.26% 0.221 0.154 4.84% 0.203 Dutch 3.245 0.156 4.82% 0.222 0.089 2.84% 0.123 Table 1: Mutual information (in bits per phone), uncertainty coefficients, and Cohen’s effect size results for CELEX. Per-phone word–form entropy added for comparison. All mutual information values are statistically significant (p < 10−5), as tested with a permutation test with 105 permutations. 0.20 0.15 0.10 0.05 0.00 0.05 0.10 0.15 Mutual Information (Bits per phone) 0 20 Density I(W; V) I(W; V POS) 4 2 0 2 4 Uncertainty Coefficient (%) 0.0 0.5 Density U(W; V) U(W; V POS) Figure 2: Mutual information and uncertainty coefficients for each language of NorthEuraLex. phological systematicity. We present kernel density estimates for these results in Figure 2 and give full results in Appendix A. Across all languages, the average uncertainty coefficient was 1.37% (Cohen’s d 0.1936). When controlling for grammatical classes, though, it was only 0.2% (Cohen’s d 0.0287). There were only 970 concepts with corresponding WORD2VEC representations in this dataset, and our language models easily overfit when conditioned on these. As we optimize the used number of PCA components (d) for these word embeddings, we can check its ‘optimum’ size. The average d across NorthEuraLex languages was only ≈22, while on CELEX it was ≈153. This might imply that the model couldn’t find systematicity in some languages due to the dataset’s small size—models were too prone to overfitting. 
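As a quick sanity check on the uncertainty coefficients, they can be recomputed directly from the per-phone entropies and mutual informations reported in Table 1; small discrepancies with the table come from rounding of the inputs.

```python
# Recomputing U(W | V) = I(W; V) / H(W) from the Table 1 values.
table1 = {"English": (3.401, 0.110), "German": (3.195, 0.168), "Dutch": (3.245, 0.156)}
for lang, (H_W, I_WV) in table1.items():
    print(f"{lang}: U = {100 * I_WV / H_W:.2f}%")
# Prints roughly 3.23%, 5.26%, 4.81%, close to the 3.24%, 5.26%, 4.82% in
# Table 1 (the table was presumably computed from unrounded values).
```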
5.3 Fantastic phonesthemes and where to find them As a phonestheme is, by definition, a sequence of phones that suggest a particular meaning, we expect them to have higher mutual information values when compared to other k-grams in the lexicon— measured in bits per phone. To identify that a prefix of length k, w≤k, is a phonestheme, we compare it to all such prefixes, being interested in the mutual information I(W≤k,V). For each prefix in our dataset, we compute the average mutual information over all n words it appears in.We then sample 105 other sets of n words and get their average mutual information. Each prefix is identified as a phonestheme with a p-value of r 105 , where r is how many comparison where it has a lower systematicity than the random sets.9 Table 2 shows identified phonesthemes for English, Dutch, and German. Inspecting the German data, it is clear that some of these prefixes and affixes that we find are fossilized pieces of derivational etymology. Further, many of the endings in German are simply the verb ending -/@n/ with an additional preceding phone. Dutch and English are less patterned. While we find few examples in Dutch, all are extremely significant. It can be argued that two examples (-/@l/ and -/xt/) are not semantic markers but rather categorizing heads in the framework of distributed morphology (Marantz and Halle, 1993)—suggestions that the words are nouns. Further, in English, we find other examples of fossilized morphology, (/k@n/-) and (/In/-). In this sense, our found phonesthemes are related to another class of restrictedapplication subword: bound morphemes (Bloomfield, 1933; Aronoff, 1976; Spencer, 1991), which carry known meaning and cannot occur alone. From the list of English prefix phonesthemes we present here, all but /In/- and /k@n/- find support in the literature (Hutchins, 1998; Otis and Sagi, 2008; Gutierrez et al., 2016; Liu et al., 2018). Furthermore, an interesting case is the suffix -/mp/, which is identified with a high confidence. This might be picking up on phonesthemes -/ump/ and -/amp/ 9 While this explanation is specific to prefixes, we straightforwardly applied this to suffixes by reversing the word forms— e.g. ‘banana’ 7→‘ananab’. 
1758 Language Phonestheme Count Examples p-value Dutch /sx/110 schelp, schild, schot, shacht, schaar <0.00001 -/@l/ 124 kegel, nevel, beitel, vleugel, zetel <0.00001 -/xt/ 42 beicht, nacht, vocht, plicht, licht <0.00001 -/Op/ 21 stop, shop, drop, top, bob 0.00068 English /In/33 infidel, intellect, institute, enigma, interim <0.00001 /sl/59 slop, slough, sluice, slim, slush <0.00001 -/kt/ 36 aspect, object, fact, viaduct, tact 0.00001 -/m@/ 32 panorama, asthma, trachoma, eczema, magma 0.00002 -/mp/ 44 stump, cramp, pump, clamp, lump 0.00003 -/@m/ 62 millennium, amalgam, paroxysm, pogrom, jetsam 0.00007 /fl/64 flaw, flake, fluff, flail, flash 0.00009 /bV/35 bum, bunch, bunk, butt, buck 0.00013 -/Qp/ 23 hop, strop, plop, pop, flop 0.00032 /gl/28 gleam, gloom, glaze, glee, glum 0.00046 /sn/38 sneak, snide, snaffle, snout, snook 0.00077 -/n@/ 34 henna, savanna, fauna, alumna, angina 0.00102 -/æg/ 23 swag, shag, bag, mag, gag 0.00107 /sw/43 swamp, swoon, swish, swoop, swig 0.00112 /sI/78 silica, secede, silicone, secrete, cereal 0.00198 -/k@/ 22 japonica, yucca, mica, hookah, circa 0.00217 /sE/34 shell, sheriff, shelf, chevron, shed 0.00217 /k@n/31 conceal, condemn, concert, construe, continue 0.00429 German /g@/69 geschehen, Gebiet, gering, Geruecht, gesinnt <0.00001 -/@ln/ 58 rascheln, rumpeln, tummeln, torkeln, mogeln <0.00001 -/ln/ 58 rascheln, rumpeln, tummeln, torkeln, mogeln <0.00001 -/@n/ 801 goennen, saeen, besuchen, giessen, streiten <0.00001 /In/34 Indiz, indes, intern, innehaben, innerhalb <0.00001 /b@/32 bestaetigen, beweisen, bewerkstelligen, betrachten, beschwichtigen <0.00001 -/p@/ 36 Lampe, Klappe, Kappe, Raupe, Wespe 0.00002 -/S@n/ 24 dreschen, wischen, mischen, rauschen, lutschen 0.00002 /Sl/39 schlagen, schlingen, schleifen, schleudern, schluepfen 0.00015 -/k@n/ 76 backen, strecken, spucken, druecken, schmecken 0.00016 -/ts@n/ 47 blitzen, schwatzen, duzen, stanzen, einschmelzen 0.00026 -/l@n/ 41 quellen, prellen, johlen, bruellen, eilen 0.00029 /ain/25 einstehen, eintreiben, einmuenden, einfinden, eingedenk 0.00033 -/Ix/ 59 reich, weich, bleich, gleich, Laich 0.00033 /Sn/22 schnitzen, schnalzen, schnappen, schnurren, schneiden 0.00036 /Sm/23 schmieren, schmieden, schmunzeln, schmoren, schmeissen 0.00077 /Sv/38 schweben, schweifen, schwirren, schwellen, schwimmen 0.00124 -/r@n/ 62 servieren, wehren, sparen, kapieren, hantieren 0.00247 /br/35 brausen, bremsen, brechen, brennen, brauen 0.00258 -/t@/ 86 Paste, Quote, Kette, vierte, Sorte 0.00281 -/n@/ 66 Traene, Tonne, Laterne, Fahne, Spinne 0.00354 -/@rn/ 70 schillern, schimmern, kapern, knattern, rattern 0.00365 Table 2: Discovered phonesthemes, represented as IPA, in Dutch, English, and German, sorted p-values according to the Benjamini–Hochberg (1995) correction. Count refers to the number of types in our corpus with that affix. 1759 from Hutchins (1998)’s list. 5.4 Correlation with human judgments As a final, albeit weak, validation of our model, we consider how well our computed systematicity compares to human judgments (Hutchins, 1998; Gutierrez et al., 2016; Liu et al., 2018). We turn to the survey data of Liu et al. (2018), in which workers on Amazon Mechanical Turk gave a 1-to5 judgment of how well a word’s form suited its meaning. 
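A sketch of the affix test described in §5.3, which produced Table 2, is given below: the average mutual information of the words sharing an affix is compared against the averages of randomly sampled word sets of the same size, yielding an empirical p-value. The `word_mi` lookup is a hypothetical stand-in for the per-word estimates.

```python
# A sketch of the phonestheme test (Section 5.3). `word_mi` maps each word to
# its estimated mutual-information contribution; `affix_words` lists the words
# containing the candidate prefix (or suffix, after reversing word forms).
import numpy as np

def phonestheme_pvalue(affix_words, word_mi, n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    all_mi = np.array(list(word_mi.values()))
    observed = np.mean([word_mi[w] for w in affix_words])
    n = len(affix_words)
    higher = 0
    for _ in range(n_samples):
        if rng.choice(all_mi, size=n, replace=False).mean() >= observed:
            higher += 1
    return higher / n_samples   # small p-value -> candidate phonestheme
```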
For each of their model’s top 15 predicted phonesthemes and 15 random non-predicted phonesthemes, the authors chose five words containing the prefix for workers to evaluate.10 Comparing these judgments to our model-computed estimates of mutual information I(W<2;V), we find a weak, positive Spearman’s rank correlation (ρ = 0.352 with p = 0.03). This shows that prefixes for which we find higher systematicity—according to mutual information—also tend to have higher humanjudged systematicity. 6 Conclusion We have revisited the linguistic question of the arbitrariness—and the systematicity—of the sign. We have framed the question on informationtheoretic grounds, estimating entropies by state-ofthe-art neural language modeling. We find evidence in 87 of 106 languages for a significant systematic pattern between form and meaning, reducing approximately 5% of the phone-sequence uncertainty of German lexicons and 2.5% in English and Dutch, when controlling for part of speech. We have identified meaningful phonesthemes according to our operationalization, and we have good precision—all but two of our English phonesthemes are attested in prior work. An avenue for future work is connecting our discovered phonesthemes to putative meanings, as done by Abramova et al. (2013) and Abramova and Fern´andez (2016). The low uncertainty reduction suggests that the lexicon is still largely arbitrary. According to the information-theoretic perspective of Monaghan et al. (2011), an optimal lexicon has an arbitrary mapping between form and meaning. If this is true, then a large amount of these benefits do accrue to language; that is, given the small degree of systematicity, we lose little of the benefit. 10 Of the 150 judgements in their dataset, only 35 were in ours as well, so we restrict our analysis to them. This is a weak signal for our model’s validity. Acknowledgments The authors would like to thank Mark Dingemanse, Adina Williams, and the anonymous reviewers for valuable insights and useful suggestions. References ˚Asa Abelin. 1999. Studies in sound symbolism. Ph.D. thesis, Department of Linguistics, G¨oteborg University G¨oteborg. Ekaterina Abramova and Raquel Fern´andez. 2016. Questioning arbitrariness in language: a data-driven study of conventional iconicity. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 343– 352, San Diego, California. Association for Computational Linguistics. Ekaterina Abramova, Raquel Fern´andez, and Federico Sangati. 2013. Automatic labeling of phonesthemic senses. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 35. Mark Aronoff. 1976. Word formation in generative grammar. Linguistic Inquiry Monographs Cambridge, Mass., 1:1–134. R Harald Baayen, Richard Piepenbrock, and Leon Gulikers. 1995. The CELEX2 lexical database (release 2); LDC96L14. Distributed by the Linguistic Data Consortium, University of Pennsylvania, web download. Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological), 57(1):289–300. Benjamin K Bergen. 2004. The psychological reality of phonaesthemes. Language, 80(2):290–311. Dami´an E. Blasi, Søren Wichmann, Harald Hammarstr¨om, Peter F. Stadler, and Morten H. Christiansen. 2016. Sound–meaning association biases evidenced across thousands of languages. 
Proceedings of the National Academy of Sciences, 113(39):10818–10823. Leonard Bloomfield. 1933. Language. Holt, Rinehart and Winston. Robert A Blust. 1988. Austronesian root theory: An essay on the limits of morphology, volume 19. John Benjamins Publishing. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. 1760 Dwight Bolinger. 2014. Language-the loaded weapon: The use and abuse of language today. Routledge. Dwight L Bolinger. 1949. The sign is not arbitrary. Thesaurus, 1(1):52–62. Peter F. Brown, Vincent J. Della Pietra, Robert L. Mercer, Stephen A. Della Pietra, and Jennifer C. Lai. 1992. An estimate of an upper bound for the entropy of English. Comput. Linguist., 18(1):31–40. Andy Clark. 1998. Magic words: how language augments human computation, pages 162–183. Cambridge University Press. Silviu Cucerzan and David Yarowsky. 2003. Minimally supervised induction of grammatical gender. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. Isabelle Dautriche, Kyle Mahowald, Edward Gibson, and Steven T. Piantadosi. 2017. Wordform similarity increases with semantic similarity: An analysis of 100 languages. Cognitive Science, 41(8):2149– 2169. Johannes Dellert and Gerhard J¨ager. 2017. NorthEuraLex (version 0.9). Eberhard-Karls University T¨ubingen: T¨ubingen. Mark Dingemanse. 2012. Advances in the crosslinguistic study of ideophones. Language and Linguistics Compass, 6(10):654–672. Mark Dingemanse. 2018. Redrawing the margins of language: Lessons from research on ideophones. Glossa: A Journal of General Linguistics, 3(1). Mark Dingemanse, Dami´an E. Blasi, Gary Lupyan, Morten H. Christiansen, and Padraic Monaghan. 2015. Arbitrariness, iconicity, and systematicity in language. Trends in Cognitive Sciences, 19(10):603 – 615. Umberto Eco. 1995. The Search for the Perfect Language (The Making of Europe). Wiley-Blackwell. John Rupert Firth. 1930. Speech [reprinted in The Tongues of Men & Speech, 1964]. J.R. Firth. 1964. The tongues of men, and Speech. Oxford University Press. Michael Gasser, Nitya Sethuraman, and Stephen Hockema. 2005. Iconicity in expressives: An empirical investigation. Experimental and empirical methods. Stanford, CA: CSLI Publications. E. Dario Gutierrez, Roger Levy, and Benjamin Bergen. 2016. Finding non-arbitrary form-meaning systematicity using string-metric learning for kernel regression. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2379–2388. Association for Computational Linguistics. Shoko Hamano. 1998. The Sound-Symbolic System of Japanese. ERIC. Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665–695. C. F. Hockett. 1960. The Origin of Speech. Scientific American, 203:88–96. Sharon Suzanne Hutchins. 1998. The psychological reality, variability, and compositionality of English phonesthemes. Ph.D. thesis, Emory University. Kent Johnson. 2004. On the systematicity of language and thought. Journal of Philosophy, 101(3):111– 139. Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 284–294. Association for Computational Linguistics. Johann-Mattis List, Michael Cysouw, and Robert Forkel. 2016. Concepticon: A resource for the linking of concept lists. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 2393–2400, Portoroˇz, Slovenia. European Language Resources Association (ELRA). Nelson F. Liu, Gina-Anne Levow, and Noah A. Smith. 2018. Discovering phonesthemes with sparse regularization. In Proceedings of the Second Workshop on Subword/Character LEvel Models, pages 49–54. Association for Computational Linguistics. Gary Lupyan and Sharon L Thompson-Schill. 2012. The evocative power of words: activation of concepts by verbal and nonverbal means. Journal of experimental psychology. General, 141(1):170–186. Margaret Magnus. 2000. What’s in a Word? Evidence for Phonosemantics. Ph.D. thesis, Norwegian University of Science and Technology. Alec Marantz and Morris Halle. 1993. Distributed morphology and the pieces of inflection. The view from Building, 20:1–52. Hans Marchand. 1959. Phonetic symbolism in english wordformation. Indogermanische Forschungen, 64:146. Andr´e Martinet. 1949. La double articulation linguistique. Travaux du Cercle linguistique de Copenhague, 5:30–37. Keith Michael McCune. 1985. The internal structure of Indonesian roots. Ph.D. thesis, University of Michigan. 1761 Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. An analysis of neural language modeling at multiple scales. CoRR, abs/1803.08240. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh annual conference of the international speech communication association. Padraic Monaghan, Morten H Christiansen, and Nick Chater. 2007. The phonological-distributional coherence hypothesis: Cross-linguistic evidence in language acquisition. Cognitive psychology, 55(4):259–305. Padraic Monaghan, Morten H. Christiansen, and Stanka A. Fitneva. 2011. The arbitrariness of the sign: Learning advantages from the structure of the vocabulary. Journal of Experimental Psychology: General, 140(3):325–347. Padraic Monaghan, Gary Lupyan, and Morten Christiansen. 2014a. The systematicity of the sign: Modeling activation of semantic attributes from nonwords. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 36. Padraic Monaghan, Richard C. Shillcock, Morten H. Christiansen, and Simon Kirby. 2014b. How arbitrary is language? Philosophical Transactions of the Royal Society B: Biological Sciences, 369:20130299. Katya Otis and Eyal Sagi. 2008. Phonaesthemes: A corpus-based analysis. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 30. Steven T. Piantadosi, Harry Tily, and Edward Gibson. 2011. Word lengths are optimized for efficient communication. Proceedings of the National Academy of Sciences, 108(9):3526–3529. Geoffrey K Pullum and Barbara C Scholz. 2007. Systematicity and natural language syntax. Croatian Journal of Philosophy, 7(21):375–402. M.S. Ratliff. 1992. Meaningful Tone: A Study of Tonal Morphology in Compounds, Form Classes, and Expressive Phrases in White Hmong. Monograph Series on Southeast Asia, Special Report (1992) Series. 
Northern Illinois University, Center for Southeast Asian Studies. Richard Rhodes. 1981. On the semantics of the Ojibwa verbs of breaking. Algonquian Papers-Archive, 12. Ferdinand de Saussure. 1912. Adjectifs indoeurop´eens du type caecus “aveugle”. In Festschrift Vilhelm Thomsen zur Vollendung des siebzigsten Lebensjahres am 25. Januar 1912, dargebracht von Freunden und Sch¨ulern, pages 202–206. Leipzig: Otto Harrassowitz. Reprinted in Saussure 1922: 595–599. Ferdinand de Saussure. 1916. Course in General Linguistics. Columbia University Press. English edition of June 2011, based on the 1959 translation by Wade Baskin. Richard Shillcock, Simon Kirby, Scott McDonald, and Chris Brew. 2001. Filled pauses and their status in the mental lexicon. In ISCA Tutorial and Research Workshop (ITRW) on Disfluency in Spontaneous Speech, pages 53–56. Jasper Snoek, Hugo Larochelle, and Ryan P Adams. 2012. Practical Bayesian optimization of machine learning algorithms. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 2951–2959. Curran Associates, Inc. Andrew Spencer. 1991. Morphological theory: An introduction to word structure in generative grammar, volume 2. Basil Blackwell Oxford. Martin Sundermeyer, Ralf Schl¨uter, and Hermann Ney. 2012. LSTM neural networks for language modeling. In Thirteenth annual conference of the international speech communication association. John Wallis. 1699. Grammar of the English language. Sren Wichmann, Eric W. Holman, and Cecil H. Brown. 2018. The ASJP database (version 18). John Wilkins. 1668. An essay towards a real character, and a philosophical language. Gellibrand. 1762 A NorthEuraLex Results Language H(W) U(W | V) U(W | V;POS) abk 2.8432 1.76% -0.26% ady 3.2988 2.00% 0.50% ain 3.0135 0.54% -0.50% ale 2.5990 1.38% 0.47% arb 3.0872 1.74% -0.07% ava 2.8161 2.55% -0.22% azj 3.0713 1.68% 1.42% bak 3.0652 2.17% 0.44% bel 3.1212 1.48% -0.37% ben 3.2638 1.69% 0.65% bre 3.1430 0.57% 1.43% bsk 3.4114 0.17% 0.10% bua 2.8739 1.94% 0.02% bul 3.2150 1.63% 0.19% cat 3.1536 1.75% 0.11% ces 3.1182 1.74% 0.19% che 3.2381 -1.60% 0.62% chv 3.1185 0.43% 0.91% ckt 2.8968 1.60% 0.47% cym 3.2752 1.42% 0.86% dan 3.2458 0.66% 0.57% dar 3.2124 1.93% -0.37% ddo 3.2711 2.15% -0.04% deu 2.9596 1.27% 0.90% ekk 2.9575 0.69% -1.55% ell 2.9141 0.15% 0.89% enf 3.0470 3.03% 0.80% eng 3.2126 0.88% 0.70% ess 2.7369 1.42% 0.29% eus 3.0070 0.71% -0.57% evn 2.8434 1.34% 0.64% fin 2.8996 1.32% 0.23% fra 3.3423 1.17% -0.32% gld 2.9055 2.31% 0.26% gle 3.1450 0.51% -0.36% heb 3.1407 1.26% 0.79% hin 3.0240 1.11% 0.68% hrv 3.0776 2.04% 0.43% hun 3.2520 0.44% 0.09% hye 3.3416 1.84% 0.38% isl 3.0386 0.50% -0.71% ita 2.8409 2.18% 0.57% itl 3.4332 1.96% 0.27% jpn 2.8157 1.72% 0.53% kal 2.5255 1.34% 0.02% Language H(W) U(W | V) U(W | V;POS) kan 2.8412 0.23% 0.40% kat 3.1831 2.04% 1.06% kaz 3.0815 2.19% -0.13% kca 2.8779 2.93% 1.40% ket 3.3202 0.72% 0.30% khk 2.9746 0.57% 0.45% kmr 3.1292 2.22% 0.26% koi 3.2419 0.57% 0.25% kor 3.1600 1.66% 0.40% kpv 3.1685 1.71% 0.48% krl 2.8655 2.19% -0.71% lat 2.8102 1.36% 0.01% lav 2.8679 0.60% -0.10% lbe 3.0239 0.94% -0.41% lez 3.3717 3.34% 0.24% lit 2.8086 1.45% -1.33% liv 3.0825 1.11% -1.34% mal 2.6773 1.90% 0.38% mdf 2.9186 1.24% -0.07% mhr 2.9952 1.08% 1.20% mnc 2.5750 3.05% -0.03% mns 2.8001 1.03% 0.18% mrj 3.1771 1.74% 0.49% myv 2.8785 1.61% 0.75% nio 2.8985 1.96% 1.46% niv 3.4408 1.46% 0.45% nld 3.0407 1.56% -0.40% nor 3.0315 0.68% 0.21% olo 3.0151 1.38% 0.49% oss 3.2484 1.42% -0.45% pbu 
3.2840 1.58% -0.05% pes 2.8443 1.63% -0.17% pol 3.3167 1.65% 0.27% por 3.2509 1.19% 0.10% ron 3.3667 0.43% -0.99% rus 3.3538 1.88% 0.17% sah 3.0002 -1.29% -0.37% sel 2.8460 1.86% 0.76% sjd 2.7920 -0.05% 0.30% slk 3.1928 1.27% 0.46% slv 2.8685 2.13% -0.40% sma 2.5011 2.02% -0.14% sme 2.6746 2.10% -0.17% smj 2.5975 0.86% -0.52% smn 2.9281 1.50% 0.22% sms 2.7608 1.06% -0.56% 1763 Language H(W) U(W | V) U(W | V;POS) spa 2.9777 1.91% 2.07% sqi 3.3473 0.22% 0.69% swe 2.8600 0.64% -0.44% tam 2.6851 -0.19% -0.63% tat 3.1365 1.50% 0.17% tel 2.8458 0.06% -1.34% tur 2.9646 1.93% 0.81% udm 3.1042 2.72% 0.37% ukr 3.1135 1.46% 0.48% uzn 3.0624 1.26% 0.13% vep 3.2055 2.53% 1.21% xal 3.2090 1.50% 0.51% ykg 2.9680 1.79% 0.65% yrk 2.8453 1.97% 0.49% yux 3.0704 -0.29% -0.18% Table 3: NorthEuraLex languages and p-values of systematicity. Bold entries are statistically significant at p < 0.05, after Benjamini–Hochberg (1995) correction. Language H(W) U(W;V) U(W;V | POS) abk 2.8432 0.0500 -0.0071 ady 3.2988 0.0661 0.0158 ain 3.0135 0.0161 -0.0150 ale 2.5990 0.0358 0.0117 arb 3.0872 0.0538 -0.0020 ava 2.8161 0.0717 -0.0059 azj 3.0713 0.0517 0.0429 bak 3.0652 0.0666 0.0130 bel 3.1212 0.0462 -0.0110 ben 3.2638 0.0553 0.0206 bre 3.1430 0.0181 0.0444 bsk 3.4114 0.0057 0.0034 bua 2.8739 0.0558 0.0007 bul 3.2150 0.0523 0.0060 cat 3.1536 0.0550 0.0032 ces 3.1182 0.0543 0.0055 che 3.2381 -0.0519 0.0194 chv 3.1185 0.0135 0.0282 ckt 2.8968 0.0464 0.0131 cym 3.2752 0.0464 0.0275 dan 3.2458 0.0214 0.0183 dar 3.2124 0.0621 -0.0114 ddo 3.2711 0.0702 -0.0013 deu 2.9596 0.0377 0.0261 ekk 2.9575 0.0203 -0.0438 ell 2.9141 0.0044 0.0252 enf 3.0470 0.0923 0.0233 Language H(W) U(W;V) U(W;V | POS) eng 3.2126 0.0284 0.0226 ess 2.7369 0.0388 0.0076 eus 3.0070 0.0214 -0.0166 evn 2.8434 0.0382 0.0175 fin 2.8996 0.0384 0.0063 fra 3.3423 0.0392 -0.0104 gld 2.9055 0.0670 0.0073 gle 3.1450 0.0161 -0.0111 heb 3.1407 0.0396 0.0243 hin 3.0240 0.0336 0.0200 hrv 3.0776 0.0627 0.0127 hun 3.2520 0.0143 0.0029 hye 3.3416 0.0615 0.0125 isl 3.0386 0.0153 -0.0208 ita 2.8409 0.0618 0.0153 itl 3.4332 0.0674 0.0090 jpn 2.8157 0.0485 0.0141 kal 2.5255 0.0340 0.0005 kan 2.8412 0.0066 0.0111 kat 3.1831 0.0649 0.0325 kaz 3.0815 0.0676 -0.0039 kca 2.8779 0.0843 0.0387 ket 3.3202 0.0240 0.0100 khk 2.9746 0.0170 0.0128 kmr 3.1292 0.0694 0.0078 koi 3.2419 0.0185 0.0077 kor 3.1600 0.0524 0.0122 kpv 3.1685 0.0542 0.0148 krl 2.8655 0.0629 -0.0195 lat 2.8102 0.0381 0.0002 lav 2.8679 0.0172 -0.0027 lbe 3.0239 0.0285 -0.0119 lez 3.3717 0.1126 0.0077 lit 2.8086 0.0409 -0.0354 liv 3.0825 0.0342 -0.0401 mal 2.6773 0.0508 0.0097 mdf 2.9186 0.0363 -0.0021 mhr 2.9952 0.0325 0.0348 mnc 2.5750 0.0785 -0.0006 mns 2.8001 0.0289 0.0048 mrj 3.1771 0.0552 0.0151 myv 2.8785 0.0463 0.0208 nio 2.8985 0.0569 0.0408 niv 3.4408 0.0504 0.0147 nld 3.0407 0.0474 -0.0118 nor 3.0315 0.0206 0.0061 1764 Language H(W) U(W;V) U(W;V | POS) olo 3.0151 0.0415 0.0143 oss 3.2484 0.0460 -0.0140 pbu 3.2840 0.0518 -0.0017 pes 2.8443 0.0463 -0.0046 pol 3.3167 0.0547 0.0086 por 3.2509 0.0387 0.0031 ron 3.3667 0.0144 -0.0322 rus 3.3538 0.0631 0.0056 sah 3.0002 -0.0388 -0.0111 sel 2.8460 0.0528 0.0207 sjd 2.7920 -0.0013 0.0082 slk 3.1928 0.0406 0.0139 slv 2.8685 0.0611 -0.0111 sma 2.5011 0.0505 -0.0033 sme 2.6746 0.0562 -0.0043 smj 2.5975 0.0223 -0.0129 smn 2.9281 0.0439 0.0061 sms 2.7608 0.0292 -0.0149 spa 2.9777 0.0568 0.0599 sqi 3.3473 0.0073 0.0226 swe 2.8600 0.0182 -0.0124 tam 2.6851 -0.0050 -0.0167 tat 3.1365 0.0471 0.0050 tel 2.8458 0.0017 -0.0374 tur 2.9646 0.0574 0.0234 udm 3.1042 0.0843 0.0110 ukr 
3.1135 0.0456 0.0142 uzn 3.0624 0.0386 0.0039 vep 3.2055 0.0812 0.0374 xal 3.2090 0.0482 0.0156 ykg 2.9680 0.0532 0.0186 yrk 2.8453 0.0561 0.0133 yux 3.0704 -0.0088 -0.0054 Table 4: NorthEuraLex languages and their uncertainty coefficients. Bold entries are statistically significant at p < 0.05, after Benjamini–Hochberg (1995) correction.
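For reference, the quantities reported in Tables 3 and 4 can be computed from the entropy estimates as in the following sketch. It assumes that U(W;V) is the mutual information normalized by the phone-sequence entropy H(W), and uses the Benjamini–Hochberg step-up rule referenced in the table captions; the entropy estimates themselves come from the neural language models and are simply taken as inputs here.

```python
def uncertainty_coefficient(h_w: float, h_w_given_v: float) -> float:
    """U(W;V) = I(W;V) / H(W) = (H(W) - H(W|V)) / H(W).

    Entropies are in bits per phone; here they are assumed inputs
    (estimated in the paper with phone-level neural language models).
    """
    return (h_w - h_w_given_v) / h_w


def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg (1995) step-up procedure.

    Returns a boolean list marking which hypotheses are rejected, i.e.
    which languages or affixes count as significantly systematic.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject
```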
2019
171
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1765–1774 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1765 Learning Morphosyntactic Analyzers from the Bible via Iterative Annotation Projection across 26 Languages Garrett Nicolai and David Yarowsky Center for Language and Speech Processing Johns Hopkins University [email protected] [email protected] Abstract A large percentage of computational tools are concentrated in a very small subset of the planet’s languages. Compounding the issue, many languages lack the high-quality linguistic annotation necessary for the construction of such tools with current machine learning methods. In this paper, we address both issues simultaneously: leveraging the high accuracy of English taggers and parsers, we project morphological information onto translations of the Bible in 26 varied test languages. Using an iterative discovery, constraint, and training process, we build inflectional lexica in the target languages. Through a combination of iteration, ensembling, and reranking, we see double-digit relative error reductions in lemmatization and morphological analysis over a strong initial system. 1 Introduction The computational processing of languages such as English and Arabic has undeniably benefited from the construction of annotated datasets such as treebanks and morphological databases. Unfortunately, the construction of even modestly-sized treebanks is very expensive, requiring hundreds of hours of expert annotation. The construction of computational tools is in turn limited by a lack of supervised training data. One alternative to hand-annotating lowresource languages (LRL) involves using existing tools for a high-resource language (HRL), such as English, and projecting these annotations to the LRL across a parallel corpus. Consider the example in Figure 1: the English sentence is POStagged and dependency parsed by tools that have been trained on large amounts of high-quality data. The sentence is word-aligned to its French translation, and the POS tags and dependency relations follow the alignments to annotate the There EX are VBP doubtless RB many JJ different JJ languages NNS in IN the DET world NN . . expl amod amod case det nmod Il EX y a VBP qu' il se rencontre , tant JJ de JJ divers JJ sons NNS dans IN le DT monde NN . . expl amod amod amod case det nmod advmod nsubj nsubj Figure 1: Projection of POS tags and dependency parse from English to French. Black arrows demonstrate leftto-right dependency relations, while red diamonds illustrate right-to-left dependency relations. French words. Note that the projection is not lossless: the aligner could not find a French translation of “doubtless”, and has thus been unable to project the RB tag or advmod relation into French. Parallel corpora are rare, and even when they do exist, they often only exist between specific pairs of languages. However, the documentation of a language often begins with the creation of several important documents, including a dictionary of key terms, and translations of religious texts. Thus, documents such as the Christian Bible are among the most translated documents in the world (Mayer and Cysouw, 2014). Furthermore, the Bible consists of short, numbered chapters and verses consisting of a small number of sentences. 
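To make the projection illustrated in Figure 1 concrete, the sketch below shows how POS tags and dependency relations can follow word-alignment links. The alignment representation (a list of source/target index pairs), the function names, and the choice of a single target head are illustrative simplifications, not the toolkit pipeline used in the paper.

```python
from collections import defaultdict

def project_pos(src_tags, alignment, tgt_len):
    """Project POS tags from a tagged source sentence onto the target side.

    src_tags: one POS tag per source token.
    alignment: iterable of (src_idx, tgt_idx) word-alignment links.
    Returns, for each target token, the list of projected tags
    (empty if the token is unaligned, as with 'doubtless' in Figure 1).
    """
    projected = defaultdict(list)
    for s, t in alignment:
        projected[t].append(src_tags[s])
    return [projected.get(t, []) for t in range(tgt_len)]


def project_dependencies(src_relations, alignment, tgt_len):
    """Project dependency relations onto aligned target modifiers.

    src_relations: (head_idx, relation, modifier_idx) tuples.
    The relation attaches to the aligned modifier, with a back pointer to an
    aligned head when one exists (None when the head is unaligned, in which
    case the relation is still kept on the modifier).
    """
    src2tgt = defaultdict(list)
    for s, t in alignment:
        src2tgt[s].append(t)
    projected = [[] for _ in range(tgt_len)]
    for head, rel, mod in src_relations:
        for t_mod in src2tgt.get(mod, []):
            heads = src2tgt.get(head, [None])
            projected[t_mod].append((rel, heads[0]))
    return projected
```

Any reasonably parallel corpus can feed such a projection, which is where the verse structure of the Bible becomes useful.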
Although not parallel to the standard required in fields such as machine translation, the structure of the Bible means that different Bibles are approximately parallel across verses. We follow a tradition of projecting POS tags from a high-resource language onto a language with fewer available tools (Yarowsky et al., 2001; Fossum and Abney, 2005; Agi´c et al., 2015; Buys 1766 and Botha, 2016). Our contributions, however, lie on the level of morphology and morphosyntax. With no further resources in the target language than a Bible translation and a dictionary, we project English POS tags, dependency relations, and semantic labels across the alignment. Leveraging the alignment and a collaboration of annotations, we are able to hypothesize both a lemma and detailed morphosyntactic features for both inflected nouns and verbs. This information can then be used to inform the construction of morphological analyzers. We learn to identify morphosyntactic categories including plurality, temporality, and case over nouns and verbs in a test set of 26 diverse languages. By leveraging annotations across a series of alternative Bible translations, we are able to successfully identify lemmas and morphological features, obtaining further improvements from strategies such as ensembling and reranking. 2 Related Work Automatic morphological induction has had numerous contributions over the years. Here, we list the most relevant to this work, and distinguish this work from what has come before. The class of methods introduced by Yarowsky et al. (2001) are the most similar to the work described in this paper. Also beginning with aligned Bible data, they recover verbal lemmas by leveraging multi-lingual alignments. However, where they are only interested in recovering the lemma, we simultaneously induce detailed morphological features of the words in the target language, over a wider range of verbal and nominal morphology, and deploy a new set of machine learning techniques to do so. Futhermore, we significantly expand the languages included in our test set, from 3 to 26 typologically diverse languages, substantially increasing the range of morphosyntactic phenomena covered and assessed. Similarly, Fossum and Abney (2005) and Agi´c et al. (2015) exploit the parallel nature of the Bible to project POS tags and train taggers in the target languages, leveraging the signal from multiple languages to improve the tagger accuracy. We focus, instead, on the induction of detailed morphological categories. Soricut and Och (2015) induce morphological transformation rules in an unsupervised manner. While this is analogous to lemmatization, part of our motivation is to also produce detailed morphological features that might be useful to train lowresource taggers, or to more richly annotate morphologically sparse languages such as English. Buys and Botha (2016) train morphological taggers in morphologically rich languages from an English projection. However, their method is dependent upon an English corpus tagged with more morphologically aware tags than are typically produced by an off-the-shelf English POS tagger. We instead argue that much of this information is recoverable from syntactic and semantic parses, allowing us to use massively-parallel corpora such as the Bible. Kirov et al. (2017) notes the morphological sparsity of English, and reverses our setup, projecting morphologically rich tags from Czech into English. 
Rather than add another potentially noisy projection step (i.e., Czech to English to LRL), we instead leverage dependency and semantic parses to more richly tag English. In the area of contraint-based discovery, our methodology most closely resembles the constrained discovery systems of Lin et al. (2016) and particularly Upadhyay et al. (2018). Starting from a high-quality seed, a learning algorithm generalizes observed patterns, iteratively increasing the seed data with confident examples, while discarding examples that fail to pass certain heuristics. However, unlike previous work, we assume no gold seed annotations for our system - our seed is extracted exclusively from a noisy bitext word alignment. 3 Methods In this section, we describe our methods for inducing lemmas and morphological features pertaining to plurality, temporality, and case from aligned English-target Bibles. Our process is outlined in Figure 2. After annotating English Bibles for POS, dependency relations, and semantic roles, these observations are projected across an alignment to a target language. Candidate analyses are first discovered from the projection. These analyses are then constrained with a number of noise-reduction heuristics. Finally, inflection tools are trained on the candidates, and used to generate new hypotheses, and the process is repeated. 1767 English Bibles Project Annotated English Bibles Annotate LRL Bibles Aligned Discover Reanalyze Annotated LRL Bibles Constrain Inflection Pairs Train Constrained Inflection Pairs English Tools Learn Seq2Seq Rediscover Seq2Seq Model Figure 2: The discovery, constraint, and generation process. Beginning in the top-left, our method proceeds towards the lower-right corner, which forms an iterative cycle that can be repeated until convergence. 3.1 Tagging and Projection We begin with a series of 27 English Bible translations, each verse-aligned to at least one Bible in a target language. Many of these Bibles are based on translations that are hundreds of years old, and preserve archaic conventions for literary reasons. Unfortunately, modern NLP tools are usually trained on modern text data, and the presence of archaic linguistic forms can seriously degrade the quality of the annotation. Fortunately, many archaicisms in the Bible are older verbal inflections that follow a small set of consistent patterns: 2nd person verbs end in “-est” instead of a null affix, and 3rd person verbs end in “-eth” instead of “s” (i.e., “seest” and “believeth”). Before tagging and parsing, we normalize these forms, as well as other common archaic forms, such as “thou”, to their modern equivalents.1 The English Bibles are then lemmatized, POStagged, and syntactically and semantically parsed. POS tags are directly projected between aligned words in the source and target: if a word in English aligns with multiple target words, its annotations are projected to all of them. Conversely, if many English words align to a single target word, all of the annotations are projected onto the target word (for induction, each of these tags is given equal, reduced weight). Parses are similarly projected across the alignment, however unlike tags, parses are tuples containing a head, a relation, and a modifier (or a 1Although Bibles in other languages can also be written in older forms of the language, we leave target normalization to future work. Vorschriften Gebote Regeln Regel Vorschrift Gebot commandments commandment Figure 3: Projecting lemmas across alignments. 
Dashed lines can be eliminated with an edit-distance threshold. predicate and its arguments, for a semantic parse). Semantic parses behave similar to POS tags, and can be projected directly onto the target words. For syntactic parses, we project the relation onto the modifier, with a back pointer to the head. When working with a noisy alignment, such as are common in low-resource situations, it is possible that either the head or the modifier will not have an aligned translation in the target. If the modifier is not aligned, then the dependency relation is lost, such as is the case with “doubtless” in Figure 1. However, if the head is not aligned, the relation will still be projected onto the modifier. For our purposes, it is far more informative to know that a particular noun is a nominal subject, without knowing the verb, than to know that a verb has a subject, but not knowing what the subject is. 3.2 Lemma Discovery Although it is straightforward to project tags across an alignment, lemmas provide a more significant challenge. In this section, we describe our method of discovering lemmas that can later be used to train lemmatizers and morphological analyzers. Our lemma-induction approach is similar to that proposed by Yarowsky et al. (2001). Each English word forms a set with the target words with which it is aligned. Likewise, each English lemma forms a set with a group of target words. In the best case, the lemma set contains translations obtained from a bilingual dictionary, but if a dictionary is sparse, the set can be supplemented with the words aligned with the English lemma. These sets are then used to create a complete bipartite graph such that each edge corresponds with a candidate plural-singular word pair. Pairs that fail to meet an edit-distance threshold can be discarded. An example is shown in Figure 3. In this example, “commandments” has been 1768 aligned to three German words. Similarly, its lemma “commandment” has been aligned to three words. Completing the graph, we establish 9 candidate plural-singular pairs. However, some of these pairs, such as Regeln–Gebot are obviously false, and can be eliminated by an edit-distance threshold. Three pairs: Regeln–Regel, Gebote– Gebot, and Vorschriften–Vorschrift, remain. 3.3 Discovery of Morphological Features Lemmatization is itself an important application, as it can reduce data sparsity in inflectionallyrich languages. However, lemmas are only one of many available English annotations that may be able to benefit LRLs. In this section, we describe our methods for leveraging English syntactic and semantic parses to discover morphological features in our target languages. We consider three types of morphological information: nominal plurality, case, and temporality. Our first task is to identify, for a given noun, whether it is singular or plural. This information is readily available from the English POS, and we can thus create an inflection triple for each word tagged as a noun. This triple contains the inflected form, the hypothesized lemma, and a morphological tag identifying whether the noun is singular or plural. For example, “women” would produce the triple {women, woman, PL}. Although English does not, for the most part, decline its nouns, some case information has been translated into syntax: direct objects of verbs are in the accusative case, indirect objects are in the dative case, and nouns in prepositional phrases headed by “of” are in the genitive case. 
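Returning to the lemma-discovery step of Section 3.2, a minimal sketch of the candidate-pair construction is given below. The plain Levenshtein implementation and the threshold value are assumptions, since the paper specifies only that an edit-distance threshold is applied; the German example mirrors Figure 3.

```python
from itertools import product

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


def candidate_pairs(inflected_set, lemma_set, max_dist=3):
    """Complete bipartite graph between target words aligned to the English
    inflection and target words aligned to (or listed as translations of)
    the English lemma; pairs above the edit-distance threshold are dropped.

    E.g. {'Vorschriften','Gebote','Regeln'} x {'Regel','Vorschrift','Gebot'}
    keeps Regeln-Regel, Gebote-Gebot, and Vorschriften-Vorschrift.
    """
    return [(w, l) for w, l in product(inflected_set, lemma_set)
            if edit_distance(w, l) <= max_dist]
```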
We approximate case by using a set of heuristics to translate a syntactic and semantic parse into a nominal case. With these heuristics, we are able to construct 12 nominative cases. Details concerning the rules used to construct the cases can be found in the Appendix.2 Finally, we extract verbal temporality. Namely, we extract whether a verb describes an event in the past, the present, or the future. While many languages further distinguish between other temporal actions such as completion or habituality, we restrict our work here to a tripartite extraction, as temporality features are ready available from an English POS tagger and a syntactic parse. 2These rules are by no means complete. They merely serve as an approximation to find some examples of the desired inflections. I baptize you with water . German Wasser Czech vodou Finnish vedellä Hungarian vízbe Russian водо́ю NOM(0.33) ACC(0.33) DAT(0.33) INST(1) AT+ESS(1) IN+ALL(1) INST(1) Figure 4: Forming a consensus from morphologicallyinformed languages. For every verb in our English Bible, we label it as either past, present, or future, and project the label onto the target language. Present and simple past verbs can be determined directly from the POS tags, while the perfect and future tenses are informed by the syntactic parse. Past participles (i.e., VBN), governed by a form of “have” is marked as past tense. Similarly, any past participle or infinitive governed by an auxiliary form of “will” or “shall” is marked as future tense. Rule-based systems, however, can be brittle, so we also investigate a secondary case signal: other target languages. The Bible is not only bilingually parallel – each translation is approximately parallel with every other language. Other languages than English may be better-suited to annotating the case of a target language. Consider the example in Figure 4. A dependency parser might inform us that “water” is a nominal modifier of “with”, but “with” is an ambiguous preposition, corresponding to both instrumental uses such as “He caught fish with a net”, and comitative uses “He sat down with his apostles”. We can observe which words in morphologically-rich languages have aligned to “water” in this verse. The case of these words can then be identified via a morphological dictionary. Morphological dictionaries are expensive to construct, but exist for a small number of languages; a consensus of high-resource languages can be used to inform the annotation in a lowresource one.3 In Figure 4, water is identified as clearly being used in the instrumental case in both Czech and Russian, and as in the essive and allative case in Finnish and Hungarian, respectively. German has a weaker signal, with an identical re3If the relevant word form is not present in any of the dictionaries, we back-off to the rule-based method. 1769 alization in three different cases. A simple voting scheme can annotate this use of “water” with the instrumental case. This annotation is then simply another piece of information to be projected across the alignment onto the target language. 3.4 Constraint To filter out noisy candidate pairs, we implement a series of sequential heuristics. These heuristics leverage the projected annotation to remove false positives while preserving as many of the true pairs as possible. We note that in the English translations of the Bible, if a word is present in its plural form, it is also often present as a singular. Furthermore, the singular form is regularly more frequent. 
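Before listing the heuristics, the consensus annotation of Figure 4 deserves a concrete sketch: the snippet below implements a weighted vote over the case labels that the helper languages' morphological dictionaries assign to the aligned word forms. The fractional weighting of ambiguous analyses and the rule-based fallback are illustrative assumptions about the "simple voting scheme," not the exact implementation.

```python
from collections import Counter

def consensus_case(analyses_per_language, fallback="RULE_BASED"):
    """Vote over case labels proposed by morphologically rich helper languages.

    analyses_per_language: one set of candidate case labels per helper
    language; an ambiguous form distributes its vote uniformly over its
    possible cases.  Returns the winning case, or the rule-based fallback
    when no helper dictionary contained the aligned word form.
    """
    votes = Counter()
    for cases in analyses_per_language:
        if not cases:
            continue
        weight = 1.0 / len(cases)
        for case in cases:
            votes[case] += weight
    if not votes:
        return fallback
    return votes.most_common(1)[0][0]


# Figure 4 example: German is three-way ambiguous, Czech and Russian say INST.
print(consensus_case([{"NOM", "ACC", "DAT"}, {"INST"}, {"AT+ESS"}, {"IN+ALL"}, {"INST"}]))
# -> 'INST'
```

With case, plurality, and temporality tags in hand, the heuristics below prune the noisy candidate pairs.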
Our first heuristic discards any pairs for which a proposed singular form occurs less frequently in the corpus than the plural. Secondly, we ensure that both inflected and lemma candidates have been regularly tagged as such. Polysemy, syncretism, and alignment errors mean that each word may have had many tags projected upon it. For example, a past tense verb may occasionally incorrectly receive a present tag – we do not want this infrequent mistake to identify false morphological phenomena. We compromise between a desire to remove noise, while preserving true candidates. For each word, we calculate the average frequency across all of its tags. A pair is kept if the desired tag occurs more frequently than average. Next, we discard any pair that demonstrates an unlikely character transformation. These transformations are discovered through the use of an unsupervised character aligner. The inflected forms are aligned with their discovered analysis. A pair is discarded if its normalized alignment likelihood does not fall within 2 standard deviations of the average likelihood. Consider the triple praised,praise,TAG. This inflection and lemma will pass an edit-distance threshold, but is much more likely to be a verbal inflection than a nominal one. The pair will be discarded if the task is plurality detection, as d→PL is an unlikely sequence. However, d→PST is very common, and thus the pair would be retained for temporality detection. Our preliminary nominal lemma detection is based solely on a singular/plural distinction, with no regards to case. It is possible that the hypothesized lemma is a singular form other than the citation form. To limit the singular forms in the discovered set to citation forms, we use the dependency parse and a target dictionary to restrict lemmas to nominal subjects that occur in the dictionary. 3.5 Generation After denoising our initial lexicon, we train models that learn to transform an inflected form into a citation form.4 After training, we attempt to analyze all verbs and nouns in the corpus. We then limit the hypotheses to high-confidence analyses, and pairs for which the predicted lemma appeared in the original target Bible. This restricted hypothesis list is then constrained via the heuristics in Section 3.4, and new models are learned. By augmenting the training data with hypotheses generated by the original models, we can exploit words that were in the original Bibles, but that our original induction methods missed, due to a missing alignment, a poor parse, or other noise. Development experiments demonstrated that one iteration of supplementing the training data was beneficial across our languages; subsequent iterations led to little further gain. 4 Experiments In this section, we describe the data and tools that we use to label our English Bibles and generate our morphological analyses. We also outline our evaluation metrics and describe our experimental results. Our Bible data is obtained from the corpus of Mayer and Cysouw (2014), which consists of verse-parallel Bible data across 591 languages, including 27 English Bibles. The English and target Bibles are aligned using the Berkeley aligner (Liang et al., 2006), and POS tagged and syntactically parsed using the Stanford NLP toolkit (Manning et al., 2014). We semantically parse the Bibles using the Deep Semantic Role Labeler (He et al., 2017). 
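The first two noise-reduction heuristics of Section 3.4 are simple enough to sketch directly. The snippet below assumes precomputed corpus counts and per-word projected-tag counts (hypothetical data structures); the character-alignment and dictionary filters described above are omitted.

```python
def frequency_filter(pairs, corpus_counts):
    """Keep only (inflected, lemma, tag) triples whose hypothesized lemma is
    at least as frequent in the corpus as the inflected form."""
    return [(infl, lemma, tag) for infl, lemma, tag in pairs
            if corpus_counts.get(lemma, 0) >= corpus_counts.get(infl, 0)]


def tag_filter(pairs, projected_tag_counts):
    """Keep a triple only if the desired tag was projected onto the inflected
    form more often than that word's average per-tag count, so that rare,
    probably erroneous projections do not create false training examples."""
    kept = []
    for infl, lemma, tag in pairs:
        counts = projected_tag_counts.get(infl, {})
        if not counts:
            continue
        average = sum(counts.values()) / len(counts)
        if counts.get(tag, 0) > average:
            kept.append((infl, lemma, tag))
    return kept
```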
The alignment filter is implemented using M2M aligner (Jiampojamarn et al., 2007), and our dictionaries come from PanLex (Kamholz et al., 2014); statistics concerning dictionary and training sizes are contained in the appendix. 4For languages such as Arabic and Hebrew, where the citation form is not an attested word, we use the unmarked nominative singular form, instead. 1770 To evaluate the quality of the lexica that are produced, we extract gold validation and heldout sets from UniMorph (Kirov et al., 2018). Using the URIEL typological database (Littel et al., 2016), we limit the languages to those that include affixing verbal and nominal inflection, and that distinctly mark plurality and temporality.5 Our evaluation set consists of 26 languages belonging to several language families such as Semitic, Germanic, Italic, Slavic, Uralic, and Bantu. For each of these languages, we randomly select a validation set of 5000 instances, and 1000 heldout instances.6 For our declension experiments, we approximate case from a majority of higher-resource morphological dictionaries, as described in Section 3.3. For these experiments, the majority is obtained from the 10 largest nominal databases in our language set. Further information is included in the appendix. 4.1 Data We consider two learning algorithms for the generation phase of lexicon creation. The first is the bidirectional, hard-attentional RNN (RNN) over edit actions of Makarov and Clematide (2018). We use 100 hidden units on the input layer, and 200 on the encoder and decoder. We train the system using the ADADELTA optimizer for a maximum of 60 epochs, with 50% dropout. The second is DirecTL+ (DTL; Jiampojamarn et al., 2010), a semi-Markov model that learns transduction actions over sequences of characters; an n-gram size of 9 is used, with a joint 𝑛-gram size of 3. We further ensemble the two models by adding the normalized confidence scores produced by each model (Ensemble). We also consider a simple reranking (RR) scheme where any analysis with a lemma appearing in a dictionary has its confidence score incremented by the score of the best original hypothesis. In this way, forms that appear in the dictionary appear at the top of the list, in the same order as they were generated by the original model. We evaluate against two simple baselines that provide estimates of the difficulty of the task. The first baseline simply produces the inflected form 5Of our languages, six do not contain declension information in UniMorph. For these languages, the declension models will be identical to the plurality ones. 6Several of the UniMorph corpora contain fewer than 6000 suitable inflection-lemma pairs; in these cases, the size of the validation set is adjusted accordingly. as the lemma (Identity). The second baseline compares an inflected form with every citation form in a dictionary, and identifies the lemma as the citation form with the lowest edit distance from the inflected form (DictED). For morphological analysis, both baselines return the most common inflectional class from the training data. All systems are evaluated on accuracy@1, accuracy@5, and accuracy@50. Accuracy@𝑛rewards a system if it returns one of the correct solutions in its first 𝑛 predictions. While we focus our analysis on the accuracy@1, containing the correct solution in an 𝑛-best list can also be desirable when recall is valued more highly than precision. 
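The evaluation metric, the ensembling scheme, and the dictionary reranker can all be sketched compactly as below. The sum-to-one normalization of the confidence scores and the candidate/lemma data structures are assumptions for illustration; the description above specifies only that normalized scores are added and that dictionary-matching analyses are boosted by the score of the best original hypothesis.

```python
def accuracy_at_n(predictions, gold, n):
    """predictions: ranked candidate lists (best first); gold: sets of
    acceptable answers.  Counts a prediction as correct if any of its
    top-n candidates is acceptable."""
    hits = sum(1 for cands, answers in zip(predictions, gold)
               if any(c in answers for c in cands[:n]))
    return hits / len(gold)


def ensemble(dtl_hyps, rnn_hyps):
    """Combine two systems' hypotheses by summing normalized confidences.
    Each argument maps candidate -> confidence score."""
    combined = {}
    for hyps in (dtl_hyps, rnn_hyps):
        total = sum(hyps.values()) or 1.0
        for cand, score in hyps.items():
            combined[cand] = combined.get(cand, 0.0) + score / total
    return sorted(combined, key=combined.get, reverse=True)


def dictionary_rerank(ranked, scores, dictionary, lemma_of):
    """Boost any candidate whose lemma appears in the dictionary by the score
    of the best original hypothesis; boosted candidates rise to the top while
    keeping their original relative order."""
    best = scores[ranked[0]]
    boosted = {c: scores[c] + (best if lemma_of(c) in dictionary else 0.0)
               for c in ranked}
    return sorted(ranked, key=lambda c: boosted[c], reverse=True)
```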
4.2 Singularization Morphological analysis produces a lemma and bundle of inflectional features, given an inflected wordform. In our first set of experiments, we investigate a special case of analysis: singularization. By focusing on singularization, we can establish which of our filtering heuristics are effective in a task where we can be relatively certain that the lemma exists somewhere in the text. In these experiments, we sequentially accumulate the heuristic filters described in Section 3.4, beginning with the plural-singular pairs hypothesized by our dictionary-independent lemma extraction. The average singularization accuracy over all 26 languages is detailed in Table 1. We see that DirecTL+ and the RNN behave very differently when the training data is filtered. DirecTL+ improves marginally for each successive filter. Contrarily, the morphological filter, in particular, leads to a decrease in accuracy for the RNN, while all of the filters sharply limit the number of correct candidates that appear lower in the list. Some of this decrease can be attributed to smaller training data, and most of the loss is recovered via a second iteration, which increases the size of the training data. However, we hypothesize that the morphological filter, in particular, is too aggressive. It removes instances that contain infrequent transformations that allow the RNN to produce correct candidates further down the list. Our systems are trained exclusively from Bible data, but are able to generalize well to modern terms with a number of different pluralizing strategies. For example, in German, even the projection baseline can correctly generalize affix deletion and umlaut: “Ämter”→“Amt” (department), as well 1771 System Projection +Lemma +Morph +Align +Dep +Dict I2 +RR Identity 9.1 9.1 9.1 9.1 9.1 9.1 9.1 9.1 DictED N/A N/A N/A N/A N/A 31.0 31.0 31.0 DTL@1 15.2 16.8 16.9 16.7 17.8 21.3 33.0 43.1 RNN@1 17.7 17.8 16.5 17.4 18.7 22.8 30.6 36.9 Ensemble@1 17.5 19.3 18.9 18.9 20.6 25.5 36.4 43.5 DTL@5 31.6 31.9 33.1 33.0 33.8 37.1 47.3 53.3 RNN@5 40.3 33.7 30.9 32.2 31.7 37.1 46.3 52.0 Ensemble@5 43.4 40.2 39.8 40.0 40.3 45.6 57.9 61.5 DTL@50 44.9 49.7 50.4 50.8 50.8 50.6 57.8 57.8 RNN@50 63.2 52.0 49.7 51.0 50.8 54.3 60.9 60.9 Ensemble@50 63.5 58.5 57.6 58.3 58.4 60.8 70.2 71.4 Table 1: Accumulative lemmatic recall in the top-1, top-5, and top-50 hypotheses. Projection does not filter training candidates, other than by edit distance. Lemma implements the lemma heuristic, Morph the morphological one, Align the alignment one, Dep the dependency parses, Dict the dictionary, I2 applies a second iteration, and RR reranks the target hypotheses. as null inflection: “Kochlöffel”→“Kochlöffel” (cooking spoon). Limiting the target candidates by case has a marked impact upon the systems. By removing false lemmas like the German genitive “Geistes” (of the spirit), the Hungarian inessive “temploban” (in the temple), and Danish definite forms l ike “skidet” (the boat), the systems are more likely to produce the citation form: German “*Ingenieurs”→“Ingenieur” (engineer); Hungarian “*gõzhajóban”→“gõzhajó” (steamboat); Danish “*rygradet”→“rygrad” (backbone). By removing these noisy forms, we see large gains; the lemmas returned by the Finnish, Hungarian, and Turkish system without noise reduction are correct less than 10% of the time, while filtering the data increases the accuracy to approximately 26%, 56%, and 70%, respectively. 
Supplementing the system with a second iteration strengthens the signal of correct inflection patterns, relatively weakening the effect of noise. For example, German nouns ending in “-ung“ are very likely to pluralize with an “-en” suffix, but the projection baseline discovers no correct “-ung” pairs. However, “-en” is a common plural suffix in German, and the systems systematically strip the “-en” from “-ungen” nouns, although often lower in the hypothesis list. These correct pairs become training examples in the second iteration, outnumbering noisy examples, and improving system accuracy. If we have access to a dictionary, simply choosing the singular form closest to the inflection provides a surprisingly strong baseline – indeed, our systems do not surpass this simple heuristic until we implement a second iteration. Noting that the dictionary and iteration process contribute significantly more than any of the filtering heuristics, we investigate moving the dictionary earlier in the pipeline. Instead of creating a lemma list from the words aligned with the English lemma, as in Section 3.2 we use a list of translations of the English lemma. By moving the dictionary to the “front-of-theline” in such a matter, we see astounding gains, with the @1 recall of the reranked ensemble improving to 58.5%. In our further experiments, we thus adopt the dictionary in the lemma extraction method. 4.3 Lemmatization Singularization is a simplified version of lemmatization, as it assumes that all input forms are in the plural. In our next experiments, we train models that take as input an inflected word form, and produce a morphological tuple containing a lemma and morphological features. We train separate models to annotate plurality, temporality, and case. In this section, we evaluate the quality of the lemmas produced by these systems, before evaluating the quality of the complete analyses in Section 4.4. Table 2 shows the accuracy of our nominal and verbal lemmatizers. In particular, verbal lemmatization appears to be a more difficult task than its nominal equivalent. Both baselines struggle to 1772 System Nouns Verbs I1 I2 +RR I1 I2 +RR Identity 9.6 9.6 9.6 2.8 2.8 2.8 DictED 34.8 34.8 34.8 18.6 18.6 18.6 DTL@1 44.8 48.1 59.3 46.3 49.4 50.6 RNN@1 45.7 47.6 55.7 47.6 51.5 49.5 Ens@1 51.0 51.0 57.5 51.1 52.8 53.7 DTL@5 61.0 63.8 68.9 59.6 60.9 63.6 RNN@5 55.0 55.1 61.1 58.4 60.5 62.0 Ens@5 66.6 64.8 71.1 65.0 65.9 68.7 DTL@50 71.1 74.7 74.7 68.2 70.0 70.0 RNN@50 71.2 68.9 68.8 70.6 69.2 69.2 Ens@50 78.7 77.2 78.4 74.8 75.2 75.9 Table 2: Average Lemmatization accuracy on nouns and verbs. I1 uses the dictionary-based lemma extraction, I2 implements a second iteration, RR adds a reranker to I2. produce the correct lemma – nouns are about 4 times as likely to observe null-inflection as verbs, and even plural nouns tend to drift significantly from their lemmas, to the point that another citation form has a smaller edit-distance. However, we note little difference between nouns and verbs for any of our systems - in fact, our verbal system prior to reranking is slightly better than the nominal system. Ensembling neural and traditional systems augments performance, The ensemble makes use of complementary information to improve over either the RNN or DTL, even when neither system correctly predicts the lemma as its top candidate. For example, DTL predicts the lemma of the Estonian “l˜opetagem” as “*l˜opemama”, while the RNN predicts “*l˜opetamine. 
Both predict the correct “l˜opetama” (to finish) in 2nd place, which is exploited by the ensemble system. Re-incorporating the dictionary back in as a reranking step also provides gains, particularly to nominal lemmatization. This is even true with very small dictionaries: although the NorthernSami and Zulu dictionaries both contain fewer than 5000 entries, North-Sami nominal lemmatization accuracy increase from 40 to 44 %, and Zulu from 38 to 40%. 4.4 Morphological Analysis In our next series of experiments, we consider not only the accuracy of the lemmas produced by our systems, but of the complete morphological analyses. The task of morphological analysis subsumes lemmatization: a correct analysis must find not only the correct lemma, but also the correct set of morphological features that transformed the lemma into the inflected form. Analyzing the same systems as in Section 3.2, we report the accuracy of complete analyses in Table 3. We note that with the exception of temporality, arriving at a consensus for the morphological tag is superior to deriving it from a simple heuristic. While the English signal is strong enough to recover some morphological information, perhaps unsurprisingly, the signal from languages that have maintained their nominal declension is stronger. Given enough languages, the signal is strong enough to overcome idiosyncratic properties of the languages individually. The heuristics that extract case from English can be confused by complex clauses. In the sentence “He ordered his soldiers to remove him from his midst” the soldiers are the nominative subject of the verb “remove”, but the dative object of the verb “order”. Relying on the dependency parse alone allows dative plurals such as the Polish “˙zołnierzom” (soldiers) to enter the training data erroneously tagged as a nominative plural. The model then incorrectly tags other words ending in “-om”, a distinctly dative suffix, as nominatives. Achieving a consensus from other languages correctly identifies the form as a dative, even though it is used as a subject. 4.5 Further Analysis In the previous sections, we averaged our results over 26 languages exhibiting various morphological phenomena. In this section, we provide a more nuanced investigation of the types of languages suited to our methods. We claim that the Bible is a suitable resource for learning the morphology of low-resource languages, but due to the necessity of gold morphological dictionaries, many of our evaluation languages cannot be considered low-resource. However, the only available resources we assume to exist are a translated Bible and a bilingual dictionary. By grouping languages by the size of their dictionaries, we can determine the impact that the size of the dictionary has on our methods, and extrapolate how they might work in a true low-resource scenario. Table 4 demonstrates how the dictionary size influences two steps in our method: lemma extraction, and reranking. We see that although the dictionary has some impact on the accuracy of trained lemmatizers, it is not the only contributing factor. 
The lan1773 System Plurality Temporality Case RB Maj I2 RR RB Maj I2 RR RB Maj I2 RR Identity 8.9 8.9 8.9 8.9 2.0 2.0 2.0 2.0 8.7 8.7 8.7 8.7 DictED 20.9 20.9 20.9 20.9 8.9 8.9 8.9 8.9 10.1 10.1 10.1 10.1 DTL@1 32.1 37.5 39.2 47.0 37.2 36.4 38.8 38.7 18.9 21.9 23.6 27.9 RNN@1 34.1 36.8 37.7 42.9 37.0 38.7 40.2 38.2 17.0 16.3 17.7 19.6 Ensemble@1 36.6 43.4 41.2 47.8 41.4 40.4 41.3 41.5 21.1 24.1 24.6 27.4 DTL@5 52.6 56.1 57.3 65.1 53.4 50.3 50.8 55.7 33.3 38.6 39.6 46.7 RNN@5 59.0 62.0 62.9 67.9 56.0 56.3 58.5 60.0 35.4 36.2 40.3 44.4 Ensemble@5 64.7 69.1 67.3 73.1 62.1 59.9 61.1 63.8 40.8 46.0 46.6 51.1 DTL@50 68.6 68.2 71.7 71.7 64.5 61.3 63.9 63.9 47.9 53.1 55.7 55.7 RNN@50 71.9 76.9 75.1 74.9 68.3 69.3 67.4 67.1 54.4 59.2 58.0 58.0 Ensemble@50 76.8 81.0 78.8 80.0 73.0 71.8 71.4 72.2 58.0 64.2 62.8 64.5 Table 3: Average Accuracy of morphological analysis for plurality detection, temporality detection, and case identification. RB denotes a system where case is hypothesized through rules, Maj denotes a majority consensus of other languages, I2 is a second iteration built on top of Maj, and RR applies a reranker to RR. #Entries Nouns Nouns +RR Verbs Verbs +RR <5K 48.7 52.1 24.1 24.4 5K-20K 38.0 41.2 35.9 38.1 20K-50K 52.5 63.4 62.3 63.0 >50K 57.4 64.5 62.3 63.0 Table 4: Average Lemmatization accuracy@1 on nouns and verbs of the ensemble system for varying dictionary sizes. guages with the smallest dictionaries perform approximately as well as larger groups on nominal lemmatization, only starting to degrade after dictionary reranking, which is to be expected. Verbal lemmatization, on the other hand, degrades much faster as the size of the available dictionary is reduced. However, we observe that the reranker – which is entirely dependent on the dictionary – has far less influence on verbs than nouns, even with a large dictionary. The size of the dictionary may be less of a factor than the types of morphology exhibited in the lower-resource languages. We next observe which languages are most suitable to our methods, by separating our results by linguistic family. Table 5 reports both the accuracy@1 and accuracy@50 for the reranked ensemble. Although our system can accurately lemmatize Bantu nouns, Bantu verbs prove much more difficult. The low accuracy on Bantu verbs appears to be at least partially responsible for the low verbal performance of LRL in Table 4. Secondly, we note that while our system struggles with Semitic and Bantu language families, our methods of projection and constraint are successful on other language families, even when their morphology differs significantly from English. We correctly lemmatize Uralic and BaltoFamily NN@1 NN@50 VB@1 VB@50 Armenian 63.0 85.1 37.7 72.1 Bantu 40.4 73.5 1.3 21.4 Hellenic 53.7 77.1 31.7 46.8 Turkic 36.9 62.8 40.5 81.3 Italic 44.1 57.1 33.0 56.3 Semitic 16.7 32.2 10.9 22.1 Uralic 58.1 80.9 51.5 78.6 Balto-Slavic 64.8 84.6 66.1 89.4 Germanic 71.6 93.6 78.1 94.0 Table 5: Average Lemmatization accuracy on nouns and verbs of the ensemble system for varying language Families. Slavic languages – languages with large case inventories – with high accuracy. Similarly, the verbal signal is strong enough to train accurate lemmatizers in languages with much more complex inflectional systems than English, such as the agglutinative Turkic and Uralic families. 5 Conclusion We have presented a method for learning morphosyntactic feature analyzers and lemmatizers from iterative annotation projection. 
Using no target-language training data, we successfully transferred multiple fine-grained annotations on 27 different English Bible editions to 26 diverse target languages. Using iterative discovery and robust ensembling of multiple high-performance morphological learning algorithms to yield standalone target language systems, we achieve doubledigit relative error reductions in both lemmatization and morphosyntactic feature analysis over a strong initial system, evaluated on modern test vocabulary in all 26 languages. 1774 References Željko Agi´c, Dirk Hovy, and Anders Søgaard. 2015. If all you have is a bit of the Bible: Learning POS taggers for truly low-resource languages. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 268–272. Jan Buys and Jan A. Botha. 2016. Cross-lingual morphological tagging for low-resource languages. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1954–1964, Berlin, Germany. Association for Computational Linguistics. Victoria Fossum and Steven Abney. 2005. Automatically inducing a part-of-speech tagger by projecting from multiple source languages across aligned corpora. In International Conference on Natural Language Processing, pages 862–873. Springer. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what’s next. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Sittichai Jiampojamarn, Colin Cherry, and Grzegorz Kondrak. 2010. Integrating joint n-gram features into a discriminative training network. In NAACLHLT, pages 697–700, Los Angeles, CA. Association for Computational Linguistics. Sittichai Jiampojamarn, Grzegorz Kondrak, and Tarek Sherif. 2007. Applying many-to-many alignments and Hidden Markov Models to letter-to-phoneme conversion. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 372– 379. Association for Computational Linguistics. David Kamholz, Jonathan Pool, and Susan M Colowick. 2014. PanLex: Building a resource for panlingual lexical translation. In LREC, pages 3145–3150. Christo Kirov, Ryan Cotterell, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sebastian J. Mielke, Arya McCarthy, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. UniMorph 2.0: Universal morphology. In Proceedings of the 11th Language Resources and Evaluation Conference, Miyazaki, Japan. European Language Resource Association. Christo Kirov, John Sylak-Glassman, Rebecca Knowles, Ryan Cotterell, and Matt Post. 2017. A rich morphological tagger for English: Exploring the cross-linguistic tradeoff between morphology and syntax. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 112–117. Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 104– 111. Association for Computational Linguistics. Ying Lin, Xiaoman Pan, Aliya Deri, Heng Ji, and Kevin Knight. 2016. 
Leveraging entity linking and related language projection to improve name transliteration. In Proceedings of the Sixth Named Entity Workshop, pages 1–10. Patrick Littel, David R Mortensen, and Lori Levin. 2016. URIEL typological database. Pittsburgh: CMU. Peter Makarov and Simon Clematide. 2018. Neural transition-based string transduction for limitedresource setting in morphology. In Proceedings of the 27th International Conference on Computational Linguistics, pages 83–93. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. Thomas Mayer and Michael Cysouw. 2014. Creating a massively parallel Bible corpus. Oceania, 135(273):40. Radu Soricut and Franz Och. 2015. Unsupervised morphology induction using word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1627–1637. Shyam Upadhyay, Jordan Kodner, and Dan Roth. 2018. Bootstrapping transliteration with constrained discovery for low-resource languages. In EMNLP. David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the first international conference on Human language technology research, pages 1– 8. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1775–1786 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1775 Adversarial Multitask Learning for Joint Multi-Feature and Multi-Dialect Morphological Modeling Nasser Zalmout and Nizar Habash Computational Approaches to Modeling Language Lab New York University Abu Dhabi United Arab Emirates {nasser.zalmout,nizar.habash}@nyu.edu Abstract Morphological tagging is challenging for morphologically rich languages due to the large target space and the need for more training data to minimize model sparsity. Dialectal variants of morphologically rich languages suffer more as they tend to be more noisy and have less resources. In this paper we explore the use of multitask learning and adversarial training to address morphological richness and dialectal variations in the context of full morphological tagging. We use multitask learning for joint morphological modeling for the features within two dialects, and as a knowledge-transfer scheme for crossdialectal modeling. We use adversarial training to learn dialect invariant features that can help the knowledge-transfer scheme from the high to low-resource variants. We work with two dialectal variants: Modern Standard Arabic (high-resource “dialect”1) and Egyptian Arabic (low-resource dialect) as a case study. Our models achieve state-of-the-art results for both. Furthermore, adversarial training provides more significant improvement when using smaller training datasets in particular. 1 Introduction Morphological tagging for morphologically rich languages (MRL) involves modeling interdependent features, with a large combined target space. Joint modeling of the different features, through feature concatenation, results in a large target space with increased sparsity. Whereas total separation of the different feature models eliminates access to the other features, which constrains the model. These issues are further exacerbated for dialectal content, with many morphosyntactic variations that further complicate the modeling. 1We view Arabic as a collective of dialectal variants in which MSA is the main high-resource dialect, and EGY is a low-resource dialect. We therefore use “variant” and “dialect” interchangeably. In this paper we work with Modern Standard Arabic (MSA) and Egyptian Arabic (EGY), both MRLs, and dialectal variants. Written Arabic text is also highly ambiguous, due to its diacriticoptional orthography, resulting in several interpretations of the same surface forms, and further increasing sparsity. Joint modeling is particularly promising for such ambiguous nature as it supports identifying more complex patterns involving multiple features. In EGY, for example, the suffix A K nA ‘we, us, our’ in the word A JƒPX drsnA can be the subject of the perfective 1st person plural verb (‘we studied’), the 1st person plural object clitic of a perfective 3rd person masculine singular verb (‘he taught us’), or the 1st person plural possessive pronoun for the nominal (‘our lesson’), among other possible interpretations. Morphological tagging models rely heavily on the availability of large annotated training datasets. Unlike MSA, Arabic Dialects are generally low on resources. In this paper we also experiment with knowledge-transfer models from high to low-resource variants. 
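To make the ambiguity of the EGY example above concrete, the competing readings of drsnA can be written down as a small list of candidate analyses, the kind of object that later disambiguation must choose among. The feature names and values below are schematic illustrations of ours, not the exact tagset used in this work.

```python
# Schematic candidate analyses for the EGY word "drsnA"; feature names and
# values are simplified placeholders, not the tagset actually used in the paper.
drsnA_analyses = [
    {"pos": "verb", "asp": "perfective", "per": "1", "num": "plural",
     "gloss": "we studied"},                                    # 1st-plural subject
    {"pos": "verb", "asp": "perfective", "per": "3", "gen": "masc", "num": "sing",
     "enclitic": "1pl_object", "gloss": "he taught us"},        # verb + object clitic
    {"pos": "noun", "enclitic": "1pl_possessive", "gloss": "our lesson"},  # noun + possessive
]
```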
The similarities between the Arabic variants, both for MSA and Dialectal Arabic (DA), like EGY, should facilitate knowledge-transfer, making use of the resources of the high-resource variants. We use multitask learning architectures in several configurations for cross-dialectal modeling. We further investigate the best approaches and configurations to use word and character embeddings in the cross-dialectal multitask learning model, and whether mapping the various pretrained word embedding spaces is beneficial. Despite having several contributions in the literature, the role of mapped embedding spaces has not been studied in the context of joint morphological modeling of different dialects. Finally, we use adversarial training to learn dialect-invariant features for MSA and EGY. The intuition is to make the modeling spaces for both 1776 variants closer to each other, which should facilitate the knowledge-transfer scheme from the highresource (MSA) to the low-resource (EGY) sides. Our models achieve state-of-the-art morphological disambiguation results for both MSA and EGY, with up to 10% relative error reduction. Adversarial training proved more useful when using a smaller EGY training datasets in particular, simulating lower-resource settings. The contributions of the paper include (1) a joint multifeature and cross-dialectal morphological disambiguation model for several MRL variants, (2) adversarial training for cross-dialectal morphological knowledge-transfer. 2 Linguistic Motivation MRLs, like Arabic, have many morphemes that represent several morphological features. The target space for the combined morphological features in MRLs therefore tends to be very large. MRLs also tend to have more inflected words than other languages. MRLs also usually have a higher degree of ambiguity, with different interpretations of the same surface form. In Arabic, this ambiguity is exacerbated by the diacritization-optional orthography, which results in having about 12 analyses per word on average (Habash, 2010). One approach to model morphological richness and ambiguity is to use morphological analyzers, which are used to encode all potential word inflections in the language. The ideal morphological analyzer should return all the possible analyses of a surface word (modeling ambiguity), and cover all the inflected forms of a word lemma (modeling morphological richness). The best analysis is then chosen through morphological disambiguation, which is essentially part-of-speech tagging for all the features in addition to lemma and diacritized form choices. MSA is the written Arabic that is mainly used in formal settings. DA, like EGY, on the other hand, is the primarily spoken language used by native Arabic speakers in daily exchanges. DA has recently seen an increase in written content, due to the growing social media use in the region. DA, similar to MSA, is also morphologically rich, with a high degree of ambiguity. DA spans many Arabic dialects that are used across the Arab World, and they vary by the regions and cities they are used in (Bouamor et al., 2018). The large number of DA variants, along with it being mainly spoken, result in DA being usually low on resources. MSA and DA have many morphological, lexical and syntactic similarities that a cross-dialectal model can leverage (Habash et al., 2012). 
DA has many MSA cognates, both MSA and DA use the same script, and DA content in general includes a lot of code-switching with MSA.2 These similarities can be useful in a joint learning model, enabling a knowledge-transfer scheme, especially from the high-resource to low-resource variants. In this paper we focus on EGY as an example of DA. The set of morphological features that we model for both MSA and EGY can be: • Open-Set Features: Lemmas (lex) and diacritized forms (diac), henceforth "lexicalized features". These features are unrestricted and have large and open vocabularies. • Closed-Set Features: A set of 14 features, including inflectional features and clitics, each with a corresponding set of values/tags that are predicted using taggers. The inflectional features include: part-of-speech (POS), aspect (asp), case (cas), gender (gen), person (per), number (num), mood (mod), state (stt), voice (vox). The clitics include: enclitics, like pronominal and negative particle enclitics; proclitics, like article proclitic, preposition proclitics, conjunction proclitics, question proclitics. Morphological disambiguation involves predicting the values for each of these features, then using these predictions to rank the different analyses from the morphological analyzer. 3 Background and Related Work Joint Modeling in NLP Joint NLP modeling in general has been an active area of research throughout the past several years, supported by recent updates in deep learning architectures. Multitask learning models have been proven very useful for several NLP tasks and applications, (Collobert et al., 2011; Søgaard and Goldberg, 2016; Alonso and Plank, 2017; Bingel and Søgaard, 2017; Hashimoto et al., 2017). Inoue et al. (2017) used multitask learning for fine-grained POS tagging in MSA. We extend their work by doing cross-dialectal modeling and various contributions for low-resource dialects. 2Although EGY, like DA in general, does not have a standardized orthography like MSA (Habash et al., 2018). 1777 Cross-Lingual Transfer Cross-lingual morphology and syntax modeling has also been a very active NLP research area, with contributions in morphological reinflection and paradigm completion (Aharoni et al., 2016; Faruqui et al., 2016; Kann et al., 2017), morphological tagging (Buys and Botha, 2016; Cotterell and Heigold, 2017), parsing (Guo et al., 2015; Ammar et al., 2016), among others. Cotterell and Heigold (2017) used multitask learning for multi-lingual POS tagging, similar in spirit to our approach. Their architecture, however, models the morphological features in each language in a single task, where each target value represents all morphological features combined. This architecture is not suitable for MRLs, with large target spaces. Adversarial Domain Adaptation Inspired by the work of Goodfellow et al. (2014), adversarial networks have been used to learn domain invariant features in models involving multiple domains, through domain adversarial training (Ganin and Lempitsky, 2015; Ganin et al., 2016). Adversarial training facilitates domain-adaptation schemes, especially in high-resource to low-resource adaptation scenarios. The approach is based on an adversarial discriminator, which tries to identify the domain of the data, and backpropagates the negative gradients in the backward direction. This enables the model to learn shared domain features. 
Adversarial domain adaptation has been used in several NLP applications, including sentiment analysis (Chen et al., 2016), POS tagging for Twitter (Gui et al., 2017), relation extraction (Fu et al., 2017; Wang et al., 2018), among other applications. As far as we know, we are the first to apply adversarial domain adaptation in the context of dialectal morphological modeling. Arabic Morphological Modeling Morphological modeling for Arabic has many contributions in both MSA (Diab et al., 2004; Habash and Rambow, 2005; Pasha et al., 2014; Abdelali et al., 2016; Khalifa et al., 2016), and Dialectal Arabic (Duh and Kirchhoff, 2005; Al-Sabbagh and Girju, 2012; Habash et al., 2013). There were also several neural extensions that show impressive results (Zalmout and Habash, 2017; Zalmout et al., 2018). These contributions use separate models for each morphological feature, then apply a disambiguation step, similar to several previous models for Arabic (Habash and Rambow, 2005; Pasha et al., 2014). Shen et al. (2016) use LSTMs with word/character embeddings for Arabic tagging. Darwish et al. (2018) use a CRF model for a multi-dialect POS tagging, using a small annotated Twitter corpus. Alharbi et al. (2018) also use neural models for Gulf Arabic, with good results. 4 Baseline Tagging and Disambiguation Architecture In this section we present our baseline tagging and disambiguation architectures. We extend this architecture for joint modeling in the section that follows. 4.1 Morphological Feature Tagging We use a similar tagging architecture to Zalmout et al. (2018), based on a Bi-LSTM tagging model, for the closed-set morphological features. Given a sentence of length L {w1, w2, ..., wL}, every word wj is represented by vector vj. We use two LSTM layers to model the relevant context for each direction of the target word, using: −→ˆh j = g(vj, −→ h j−1) ←−ˆh j = g(vj, ←− h j+1) where hj is the context vector from the LSTM for each direction. We join both sides, apply a nonlinearity function, output layer, and softmax for a probability distribution. The input vector vj is comprised of: vj = [wj; sj; af j ] Where wj is the word embedding vector, sj is a vector representation of the characters within the word, and af j is a vector representing all the candidate morphological tags (from an analyzer), for feature f. We pre-train the word embeddings with Word2Vec (Mikolov et al., 2013), using a large external dataset. For the character embeddings vector sj we use an LSTM-based architecture, applied to the character sequence in each word separately. We use the last state vector as the embedding representation of the word’s characters. The morphological feature vector af j embeds the candidate tags for each feature. We use a morphological analyzer to obtain all possible feature values of the word to be analyzed, embed the 1778 Bi-LSTM ••••       Output Layer argmax softmax •••• •••• ̂"# $# •••• z%& '& $&    z%# (# ) '#          Output Layer argmax softmax ̂"& (& ) Figure 1: The overall tagging architecture, with the input vector as the concatenation of the word, characters, and candidate tag embeddings. values using a feature-specific embedding tensor, then sum all the resulting vectors for each feature: af j = Nf X n=1 af j,n Where Nf is the maximum number of possible candidate tags for the word j (from the analyzer), for feature f. We sum the vectors because the tags are alternatives, and do not constitute a sequence. 
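As a concrete illustration of how the input vector v_j = [w_j; s_j; a_j^f] is assembled, the following is a minimal PyTorch sketch for a single word and a single feature f. The class, argument names, and dimensions are ours (placeholders rather than the paper's exact configuration), and the word embeddings would in practice be initialized from the pre-trained Word2Vec vectors.

```python
import torch
import torch.nn as nn

class TaggerInput(nn.Module):
    """Builds v_j = [w_j ; s_j ; a_j^f]: word embedding, character-LSTM summary,
    and the sum of the candidate-tag embeddings for feature f (Section 4.1)."""
    def __init__(self, n_words, n_chars, n_tags_f,
                 d_word=250, d_char=50, d_char_lstm=100, d_tag=50):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, d_word)           # pre-trained in practice
        self.char_emb = nn.Embedding(n_chars, d_char)
        self.char_lstm = nn.LSTM(d_char, d_char_lstm, batch_first=True)
        self.tag_emb = nn.Embedding(n_tags_f, d_tag)            # feature-specific embedding tensor

    def forward(self, word_id, char_ids, cand_tag_ids):
        # word_id: (1,), char_ids: (1, word_length), cand_tag_ids: (1, N_f)
        w = self.word_emb(word_id)                               # w_j
        _, (h_last, _) = self.char_lstm(self.char_emb(char_ids))
        s = h_last[-1]                                           # s_j: last character-LSTM state
        a = self.tag_emb(cand_tag_ids).sum(dim=1)                # a_j^f: summed candidate tags
        return torch.cat([w, s, a], dim=-1)                      # v_j
```

The resulting v_j vectors are then fed to the Bi-LSTM tagger shown in Figure 1.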
The af j vector does not constitute a hard constraint and can be discarded if a morphological analyzer is not used. Figure 1 shows the overall tagging architecture. 4.2 Lemmatization and Diacritization The morphological features that are non-lexical, like POS, gender, number, among others, are handled by the model presented so far, using the multitask learning architecture. Lexical features, like lemmas and diacritized forms, on the other hand, are handled with neural language models, as presented by Zalmout and Habash (2017) and Zalmout et al. (2018). The lexical features are more difficult to model jointly with the non-lexical features, as they have large target spaces, and modeling them as classification tasks is not feasible. 4.3 Full Morphological Disambiguation The predicted feature values for each word, whether from the tagger or the language models, can be returned directly if we do not use a morphological analyzer, without an explicit ranking step. If a morphological analyzer is used, the disambiguation system selects the optimal analysis for the word from the set of analyses returned by the morphological analyzer. We use the predicted feature values from the taggers and language models to rank the analyses, and select the analysis with highest number of matched feature values. We also use weighted matching; where instead of assigning ones and zeros for the matched/mismatched features, we use a featurespecific matching weight. We replicate the morphological disambiguation pipeline presented in earlier contributions (Zalmout and Habash, 2017; Zalmout et al., 2018), and use the same parameter values and feature weights. 5 Multitask Learning Architecture Most of the previous approaches for morphological tagging in Arabic learn a separate model for each morphological feature, and combine the predicted tags for disambiguation (Pasha et al., 2014; Zalmout and Habash, 2017; Zalmout et al., 2018). This hard separation eliminates any knowledge sharing among the different features when training and tagging. Joint learning, through parameter sharing in multitask learning, helps prune the space of target values for some morphological features, and reduce sparsity. The separation of the morphological models is also inefficient in terms of execution complexity. Training 14 different models, and running them all during runtime, is very wasteful in terms of execution time, memory footprint, and disk space. Multitask learning is particularly useful in tasks with relatively complementary models, and usually involves primary and auxiliary tasks. We use multitask learning for joint training of the various morphological features. We extend the morphological tagging architecture presented at the previous section into a multitask learning model. We learn the different morphological features jointly through sharing the parameters of the hidden layers in the Bi-LSTM network. The input is also shared, through the word and character embeddings. We also use a unified feature-tags vector representation for all features, through concatenating the af j vectors for each feature of each word: aj = [apos j ; ...; anum j ; ...; avox j ] The output layer is separate for each morphological feature, with separate softmax and argmax operations. The loss function is the average of the individual feature losses, which are based on min1779 Bi-LSTM z!" 
#" $" •••• %" •••• z!& #& $& %& •••• •••• ̂(" )*+ Out Layer ̂(" ,*̂(& )*+ ̂(& ,*•••• •••• ••••             argmax softmax Out Layer argmax softmax Out Layer •••• argmax softmax Out Layer argmax softmax       Figure 2: The multitask learning architecture, having separate output layers for each feature. imizing cross entropy H for each feature f: H( ˆT, T) = 1 |F| X f∈F H( ˆtf, tf) Where T represents the combined morphological tags for each word, and F is the set of features {pos, asp, ..., vox}. Figure 2 shows the overall architecture for tagging using multitask learning. 6 Cross-Dialectal Model Joint morphological modeling of high-resource and low-resource languages can be very beneficial as a knowledge-transfer scheme. Knowledgetransfer is more viable for languages that share linguistic similarities. In the context of DA, the linguistic similarities between MSA and the dialects, along with the MSA cognates common in DA, should allow for an efficient transfer model. We train the model through dividing the datasets of each variant into batches, and running one variant-specific batch at a time. We introduce various extensions to the multitask learning architecture for cross-dialectal modeling. These include sharing the embeddings for the pretrained word embeddings and character embeddings, sharing the output layers for the different features, and adversarial training as a form of dialect adaptation. The decisions of shared vs joint modeling throughout the various architecture choices will also affect the size of the model and number of parameters. 6.1 Shared Embeddings Pretrained embeddings have been shown to be very beneficial for several NLP tasks in Arabic (Zalmout and Habash, 2017; Erdmann et al., 2018; Watson et al., 2018). In the context of joint modeling of different variants, pretrained embeddings can either be learnt separately or jointly, with several different configurations that include: • Separate embedding spaces, through separate models for the different dialects, trained on separate datasets. • Merged embedding datasets, by merging the datasets for the different dialects and train a single embedding model. This approach is viable because the different Arabic variants use the same script, and DA usually involves a lot of code-switching with MSA. • Mapped embedding spaces, by training separate models for each dialect, then mapping the embedding spaces together. We use VECMAP (Artetxe et al., 2016, 2017) to map the embedding spaces of the different variants (MSA and DA). VECMAP uses a seed dictionary to learn a mapping function that minimizes the distances between seed dictionary unigram pairs. In addition to shared word embeddings, the character-level embeddings can also be learned separately or jointly. We do not use pretrained embeddings for the characters, and the embeddings are learnt as part of the end-to-end system. 6.2 Shared Output Layers In the multitask learning architecture, each of the different morphological features needs a separate output layer. In our experiments with Arabic, we are modeling 14 morphological features, which requires 14 output layers. For cross-dialectal modeling, we can have separate output layers for each dialect, which results in 28 output layers for MSA and EGY. Another design choice in this case is to share the output layers between the different dialects, regardless of how many dialects are modeled jointly, with 14 shared output layers only. 
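The multitask architecture of Figure 2, together with the shared-output-layer option just described, can be summarized in a short sketch: one linear-plus-softmax head per morphological feature sits on top of the shared Bi-LSTM state, and the loss averages the per-feature cross-entropies. Class and function names are ours; when the heads are shared across dialects, the tagset sizes would cover the merged MSA and EGY inventories.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiFeatureHeads(nn.Module):
    """One output layer per feature (pos, asp, ..., vox) over the shared
    Bi-LSTM hidden state, as in the multitask architecture of Figure 2."""
    def __init__(self, d_hidden, tagset_sizes):
        super().__init__()
        self.heads = nn.ModuleDict({feat: nn.Linear(d_hidden, n_tags)
                                    for feat, n_tags in tagset_sizes.items()})

    def forward(self, h):                          # h: (batch, seq_len, d_hidden)
        return {feat: head(h) for feat, head in self.heads.items()}

def multitask_loss(logits, gold):
    """Average of the per-feature cross-entropies, mirroring the paper's loss."""
    per_feature = [F.cross_entropy(logits[f].flatten(0, 1), gold[f].flatten())
                   for f in logits]
    return torch.stack(per_feature).mean()
```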
Despite the morphological features being similar across the dialects, the target space for each feature might vary slightly for each dialect (as in proclitics and enclitics). In the case of shared output layers, we have to merge the target space values for the features of the different dialects, and use this combined set as the target vocabulary. 6.3 Adversarial Dialect Adaptation Similar to adversarial domain adaptation, the goal of the adversarial dialect adaptation approach is to learn common features for the different dialects through an adversarial discriminator. Learning dialect-invariant features would facilitate a 1780 richer knowledge-transfer scheme from the highresource to the low-resource variants, since they are both modeled in the same invariant space. Adversarial adaptation can make use of a large annotated dataset from the high-resource dialect, unlabeled low-resource dialect data, and a small annotated low-resource dialect dataset. Adversarial adaptation learns dialect invariant features through backpropagating the negative gradients in the backward direction for the discriminator. The backward/forward propagation is managed by the Gradient Reversal Layer. Figure 3 shows the architecture with the discriminator task. Gradient Reversal Layer Presented by Ganin and Lempitsky (2015), the gradient reversal layer (GRL) passes the identity function in the forward propagation, but negates the gradients it receives in backward propagation, i.e. g(F(x)) = F(x) in forward propagation, but ∆g(F(x)) = −λ∆F(x) in backward propagation. λ is a weight parameter for the negative gradient, which can have an update schedule. λ is used to control the dissimilarity of features at the various stages of training. It can be small at the beginning of training to facilitate better morphological modeling, then increased to learn domain invariant features later on. Bi-LSTM z!" #" $" •••• %" •••• z!& #& $& %& •••• •••• ̂(" )*+ ̂(" ,*̂(& )*+ ̂(& ,*••••       ̂(" ./01234 GRL Out Layer •••• argmax softmax Out Layer argmax softmax Out Layer •••• argmax softmax Out Layer argmax softmax argmax softmax Out Layer ̂(& ./01234 GRL argmax softmax Out Layer •••• •••• •••• •••• ••••             Figure 3: The adversarial adaptation architecture, with a discriminator task that backpropagates negative gradients using the Gradient Reversal Layer (GRL). Training Process For each of the training batches, we populate half of the batch with samples from the morphologically labeled data, and the other half with the unlabeled data. The model calculates the morphological tagging loss for the first half, and the discriminator loss with the other, and optimizes for both jointly. 7 Experiments and Results In this section we first discuss the datasets that we use, along with the experimental setup for the various experiments. We then discuss the results of the different models, using the full training datasets, and a learning curve over the EGY dataset, to simulate low-resource settings. 7.1 Data Labeled Data For MSA we use the Penn Arabic Treebank (PATB parts 1, 2, and 3) (Maamouri et al., 2004). For EGY, we use the ARZ Treebank (ARZTB) annotated corpus from the Linguistic Data Consortium (LDC), parts 1, 2, 3, 4, and 5 (Maamouri et al., 2012). The annotation process and features are similar to those of MSA. We follow the data splits recommended by Diab et al. (2013) for training, development, and testing, for both MSA and EGY. Table 1 shows the data sizes. 
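Stepping back to the adversarial component of Section 6.3 for a moment, the gradient reversal layer and the composition of one training batch can be sketched as follows. This is a minimal PyTorch illustration; the surrounding model objects (heads, discriminator, batch tensors) are assumed rather than taken from the paper, so the training step is given only as hedged pseudocode in comments.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity in the forward pass, gradients
    multiplied by -lambda in the backward pass (Ganin and Lempitsky, 2015)."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None    # no gradient w.r.t. lambda

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# One adversarial training step, schematically (all names below are assumed):
#   tag_loss  = multitask_loss(heads(h_labeled), gold_tags)          # labeled half of the batch
#   disc_out  = discriminator(grad_reverse(h_unlabeled, lam=1.0))    # unlabeled half
#   disc_loss = F.cross_entropy(disc_out, dialect_ids)               # MSA vs. EGY
#   (tag_loss + disc_loss).backward()                                # optimized jointly
```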
Throughout the different experiments in this paper, the DEV TEST dataset is used during the system development to assess design choices. The BLIND TEST dataset is used after finalizing the architecture, to evaluate the system and present the overall results. We use Alif/Ya and Hamza normalization, and we remove all diacritics (besides for lemmas and diacritized forms) for all variants. TRAIN DEV TEST BLIND TEST MSA 503K 63K 63K EGY 134K 21K 20K Table 1: Word count statistics for MSA and EGY. The morphological analyzers that we use include SAMA (Graff et al., 2009) for MSA, and a combination of SAMA, CALIMA (Habash et al., 2012), and ADAM (Salloum and Habash, 2014) for EGY, as used in the MADAMIRA (Pasha et al., 2014) system. Unlabeled Data The pretrained word embeddings for MSA are trained using the LDC’s Gigaword corpus (Parker et al., 2011). For EGY we use about 410 million words of the Broad Operational Language Translation (BOLT) Arabic Forum Discussions (Tracey et al., 2018). We use the MADAR corpus (Bouamor et al., 2018) as the seed dictionary for embedding space mapping. We use the EGY data from the work by Zbib et al. (2012) as the unlabeled corpus for EGY. 1781 7.2 Experimental Setup Tagging Architecture We use two hidden layers of size 800 for the Bi-LSTM network (two for each direction), and a dropout wrapper with keep probability of 0.7, and peephole connections. We use Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.0005, and cross-entropy cost function. We run the various models for 70 epochs (fixed number of epoch since we use dropout). The LSTM character embedding architecture uses two LSTM layers of size 100, and embedding size 50. We use Word2Vec (Mikolov et al., 2013) to train the word embeddings. The embedding size is 250, and the embedding window is of size two. Adversarial Adaptation For the adversarial adaptation experiments we first observed that the average sentence length in the unlabeled EGY dataset is very short compared to the MSA dataset (5 words per sentence for the unlabeled dataset, and 31 words per sentence for MSA). The difference in sentence length results in the unlabeled EGY dataset being four times the number of batches compared to MSA, for the same number of tokens, and the model was not converging. We therefore use a minimum sentence length of 14 words for the unlabeled dataset, which results in about 9K sentences (∼185K tokens). We also found that a constant λ value of one performed better than scheduling the value starting from zero. Metrics The evaluation metrics we use include: • POS accuracy (POS): The accuracy of the POS tags, of a tagset comprised of 36 tags (Habash et al., 2013). • The non-lexicalized morphological features accuracy (FEATS): The accuracy of the combined 14 closed morphological features. • Lemmatization accuracy (LEMMA): The accuracy of the fully diacritized lemma. • Diacritized forms accuracy (DIAC): The accuracy of the diacritized form of the words. • Full Analysis Accuracy (FULL): The overall accuracy over the full analysis; FEATS (including POS)+LEMMA+DIAC, which is the strictest evaluation approach. Baselines The baselines are based on separate models for the different features. The first baseline is MADAMIRA (Pasha et al., 2014), which is a popular morphological disambiguation tool for Arabic. MADAMIRA uses SVM taggers for the different non-lexical features, and n-gram language models for the lemmas and diacritized forms. 
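For reference, the analysis-ranking step of Section 4.3, whose output the FULL, FEATS, DIAC, and LEX metrics above score, can be sketched as follows. The function name and the default weight are ours; the actual feature-specific weights follow earlier work and are not reproduced here.

```python
def rank_analyses(analyses, predicted, weights):
    """Return the analyzer analysis whose feature values best match the
    taggers' and language models' predictions (Section 4.3).

    analyses:  list of dicts from the morphological analyzer
    predicted: dict mapping each feature (pos, ..., diac, lex) to its prediction
    weights:   dict of feature-specific matching weights
    """
    def score(analysis):
        return sum(weights.get(feat, 1.0)
                   for feat, value in predicted.items()
                   if analysis.get(feat) == value)
    return max(analyses, key=score)
```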
We also use the neural extensions of MADAMIRA (Zalmout and Habash, 2017; Zalmout et al., 2018), which are based on a similar architecture, but use LSTM taggers instead of the SVM models, and LSTM-based language models instead of the n-gram models. 7.3 Results To evaluate the performance of the knowledgetransfer scheme, we present the results in two parts. The first presents the results for the full MSA and EGY datasets, evaluating the accuracy of the various architecture configurations. We then present the results of a learning curve over the size of the EGY training dataset, modeling various degrees of low-resource performance. The goal is to assess the multitask learning and adversarial training models in particular, and the degree of knowledge-transfer, which should be more helpful when the size of the EGY training data is lower. 7.3.1 Joint Morphological Modeling Table 2 shows the results of the joint modeling of MSA and EGY. Based on the results, we make the following observations: Multi-Feature Modeling The results for the multi-feature models show consistent and significant improvement compared to the separate models for each feature, especially for MSA. This supports the assumption that multi-feature modeling can identify more complex patterns involving multiple features, that separate models cannot. Cross-Dialectal Modeling: Merged Training Data vs Multitask Learning For the crossdialectal MSA and EGY models, we first experiment with merging the training datasets for both, and train a single model over the merged datasets. This model is a simple baseline for the crossdialectal models, but imposes hard joint modeling that might lead to some knowledge loss. The results indicate that the multitask learning architecture performs much better, especially for MSA. The accuracy for POS tagging for EGY in particular was higher or similar though. This is probably because POS behaves very similarly in both MSA and EGY, unlike other morphological features that might converge slightly. So the added MSA training samples were generally helpful. 
1782 MODEL DEV TEST BLIND TEST FULL FEATS DIAC LEX POS FULL FEATS DIAC LEX POS MADAMIRAMSA (Pasha et al., 2014) 85.6 87.1 87.7 96.3 97.1 85.6 87.3 87.6 96.3 97.0 MSAseparate features (Zalmout and Habash, 2017) 90.4 92.3 92.4 96.9 97.9 90.1 92.3 92.1 96.6 97.8 MSAMTL:MSA 90.8 92.7 92.7 96.9 97.9 90.8 93.0 92.5 96.7 97.9 MSAMSA+EGY merged training datasets 90.1 91.9 91.8 96.9 97.8 89.8 92.0 91.4 96.5 97.7 MSAMTL:MSA+EGY mapped embedding spaces 90.6 92.5 92.4 96.8 97.8 90.3 92.5 91.9 96.5 97.7 MSAMTL:MSA+EGY merged embedding corpora 91.1 93.0 92.9 96.9 97.9 91.0 93.2 92.6 96.7 98.0 MSAMTL:MSA+EGY separate embedding spaces 91.2 93.1 92.9 97.0 98.0 91.1 93.3 92.7 96.7 98.0 + shared output layers per feature 91.4 93.3 93.1 97.0 98.0 91.2 93.4 92.8 96.8 98.0 + shared character embeddings 91.2 93.1 93.0 97.0 98.0 91.1 93.3 92.7 96.7 97.9 MSAMTL:MSA+EGY Adversarial Dialect Adaptation* 91.3 93.2 93.0 97.0 98.0 91.2 93.3 92.8 96.7 97.9 MADAMIRAEGY (Pasha et al., 2014) 76.2 86.7 82.4 86.4 91.7 77.3 86.9 83.3 87.3 91.8 EGYseparate features (Zalmout et al., 2018) 77.0 88.8 82.9 87.6 92.9 78.0 88.8 83.6 87.8 93.3 EGYMTL:EGY 77.2 88.8 82.9 87.6 93.1 78.1 88.8 83.5 88.0 93.4 EGYMSA+EGY merged training datasets 77.1 88.9 82.7 87.6 93.5 78.2 89.0 83.5 88.0 93.8 EGYMTL:MSA+EGY mapped embedding spaces 76.7 88.3 82.6 87.3 92.7 78.0 88.6 83.3 87.8 93.3 EGYMTL:MSA+EGY merged embedding corpora 77.2 89.0 82.9 87.7 93.1 78.1 88.9 83.5 88.0 93.5 EGYMTL:MSA+EGY separate embedding spaces 77.3 89.0 83.0 87.7 93.1 78.4 89.2 83.7 88.0 93.6 + shared output layers per feature 77.4 89.1 83.0 87.7 93.2 78.5 89.3 83.8 88.0 93.7 + shared character embeddings 77.3 89.0 82.9 87.7 93.2 78.2 89.1 83.6 88.1 93.7 EGYMTL:MSA+EGY Adversarial Dialect Adaptation* 77.5 89.3 83.1 87.7 93.3 78.6 89.4 83.8 88.1 93.8 Table 2: Disambiguation results for joint MSA and EGY modeling. MTL is Multitask Learning. *Best adversarial result was with merged embedding spaces. Embedding Models Joint embedding spaces between the dialects, whether through embedding space mapping or through learning the embeddings on the combined corpus, did not perform well. Using separate embedding models (whether for word or character embeddings) for each dialect shows better accuracy. Embedding models learn properties and morphosyntactic structures that are specific to the training data. Mapping the embedding spaces likely results in some knowledge loss. Unlike the adversarial training model though, at which the merged embedding datasets model performed better. This is expected since the goal of adversarial training is to bring the overall feature spaces closer to learn dialect-invariant features. Shared Output Layers The results indicate that using shared output layers for the different dialects improves the overall accuracy. Shared output layers are more likely to learn shared morphosyntactic structures from the other dialect, thus helping both. Having separate layers wastes another joint learning potential. The shared output layers further reduce the size of the overall model. Adversarial Dialect Adaptation The adversarial adaptation experiments show slightly higher results for EGY, but very close results to the multitask learning model for MSA. Since MSA is resource-rich it is expected that adversarial training would not be beneficial (or even hurtful), as the dialect-invariant features would hinder the full utilization of the rich MSA resources. 
For EGY, we expect that the knowledge-transfer model would be more beneficial in lower-resource scenarios, we therefore experiment with a learning curve for the training dataset size in the next section. 7.3.2 Modeling Training Data Scarcity EGY TRAIN SIZE EGY MSA-EGY MTL ADV 2K (1.5%) 29.7 61.9 71.1 8K (6%) 62.5 73.5 78.3 16K (12%) 74.7 78.1 81.5 33K (25%) 80.7 81.6 83.5 67K (50%) 83.3 82.0 84.0 134K (100%) 84.5 85.4 85.6 Table 3: The results (FEATS) of the learning curve over the EGY training dataset, for the EGY dataset alone, multitask learning (MTL), and the adversarial training (ADV). We do not use morphological analyzers here, so the results are not comparable to Table 2. Knowledge-transfer schemes are more valuable in low-resource settings for the target language. To simulate the behavior of the multitask and adversarial learning architectures in such setting, we train the model using fractions of the EGY training data. We reduce the training dataset size by a factor of two each time. We then simulate extreme scarcity, having only 2K EGY annotated tokens. Low-resource dialects will have very limited 1783 or no morphological analyzers, so we also simulate the lack of morphological analyzers for EGY. Since we are not using an EGY morphological analyzer, we evaluate the models on the set of nonlexicalized and clitics features only, without the diacritized forms and lemmas. We also do not perform an explicit disambiguation step through analysis ranking, and we evaluate on the combined morphological tags directly for each word. Table 3 shows the results. Multitask learning with MSA consistently outperforms the models that use EGY data only. The accuracy almost doubles in the 2K model. We also notice that the accuracy gap increases as the EGY training dataset size decreases, highlighting the importance of joint modeling with MSA in low-resource DA settings. The adversarial adaptation results in the learning curve further show a significant increase in accuracy with decreasing training data size, compared to the multitask learning results. The model seems to be facilitating more efficient knowledgetransfer, especially for the lower-resource EGY experiments. We can also observe that for the extreme low-resource setting, we can double the accuracy through adversarial multitask learning, achieving about 58% relative error reduction. The results also indicate that with only 2K EGY annotated tokens, and with adversarial multitask learning with MSA, we can achieve almost the same accuracy as 16K tokens using EGY only. This is a significant result, especially when commissioning new annotation tasks for other dialects. Error Analysis We investigated the results in the learning curve to understand the specific areas of improvement with multitask learning and adversarial training. We calculated the accuracies of each of the features, for both models, and across all the dataset sizes. We observed that the POS and Gender features benefited the most of the joint modeling techniques. Whereas features like Mood and Voice benefited the least. This is probably due to the relatively similar linguistic behavior for POS and Gender in both MSA and EGY, unlike Mood or Voice, which are less relevant to DA, and can be somewhat inconsistent with MSA. The improvement was consistent for both approaches, and across the training data sizes, with POS having almost 61% relative error reduction in the 2K dataset with adversarial training, and Mood (the least improving feature) of about 8%. 
And 8% for POS, and 0% for Mood, in the full size dataset. 8 Conclusions and Future Work In this paper we presented a model for joint morphological modeling of the features in morphologically rich dialectal variants. We also presented several extensions for cross-dialectal modeling. We showed that having separate embedding models, but shared output layers, performs the best. Joint modeling for the features within each dialect performs consistently better than having separate models, and joint cross-dialectal modeling performs better than dialect-specific models. We also used adversarial training to facilitate a knowledge-transfer scheme, providing the best result for EGY, especially in lower-resource cases. Our models result in state-of-the-art results for both MSA, and EGY. Future work includes joint and cross-dialectal lemmatization models, in addition to further extension to other dialects. Acknowledgment The first author was supported by the New York University Abu Dhabi Global PhD Student Fellowship program. The support and resources from the High Performance Computing Center at New York University Abu Dhabi are also gratefully acknowledged. References Ahmed Abdelali, Kareem Darwish, Nadir Durrani, and Hamdy Mubarak. 2016. Farasa: A fast and furious segmenter for Arabic. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 11–16, San Diego, California. Roee Aharoni, Yoav Goldberg, and Yonatan Belinkov. 2016. Improving sequence to sequence learning for morphological inflection generation: The biumit systems for the sigmorphon 2016 shared task for morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 41–48. Rania Al-Sabbagh and Roxana Girju. 2012. A supervised POS tagger for written Arabic social networking corpora. In Proceedings of KONVENS 2012, pages 39–52. OGAI. Main track: oral presentations. Randah Alharbi, Walid Magdy, Kareem Darwish, Ahmed Abdelali, and Hamdy Mubarak. 2018. Partof-Speech Tagging for Arabic Gulf Dialect Using Bi-LSTM. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA). Héctor Martínez Alonso and Barbara Plank. 2017. When is multitask learning effective? semantic sequence prediction under varying data conditions. In 1784 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 44– 53. Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2016. Many languages, one parser. Transactions of the Association for Computational Linguistics, 4:431–444. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In EMNLP, pages 2289–2294. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 451–462. Joachim Bingel and Anders Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 164–169, Valencia, Spain. 
Association for Computational Linguistics. Houda Bouamor, Nizar Habash, Mohammad Salameh, Wajdi Zaghouani, Owen Rambow, Dana Abdulrahim, Ossama Obeid, Salam Khalifa, Fadhl Eryani, Alexander Erdmann, and Kemal Oflazer. 2018. The MADAR Arabic dialect corpus and lexicon. In The International Conference on Language Resources and Evaluation, Miyazaki, Japan. Jan Buys and Jan A Botha. 2016. Cross-lingual morphological tagging for low-resource languages. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1954–1964. Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2016. Adversarial deep averaging networks for cross-lingual sentiment classification. ArXiv e-prints. Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493–2537. Ryan Cotterell and Georg Heigold. 2017. Crosslingual character-level neural morphological tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 748–759, Copenhagen, Denmark. Association for Computational Linguistics. Kareem Darwish, Hamdy Mubarak, Ahmed Abdelali, Mohamed Eldesouki, Younes Samih, Randah Alharbi, Mohammed Attia, Walid Magdy, and Laura Kallmeyer. 2018. Multi-Dialect Arabic POS Tagging: A CRF Approach. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA). Mona Diab, Nizar Habash, Owen Rambow, and Ryan Roth. 2013. LDC Arabic treebanks and associated corpora: Data divisions manual. arXiv preprint arXiv:1309.5652. Mona Diab, Kadri Hacioglu, and Daniel Jurafsky. 2004. Automatic Tagging of Arabic Text: From Raw Text to Base Phrase Chunks. In Proceedings of the 5th Meeting of the North American Chapter of the Association for Computational Linguistics/Human Language Technologies Conference (HLT-NAACL04), pages 149–152, Boston, MA. Kevin Duh and Katrin Kirchhoff. 2005. POS tagging of dialectal Arabic: a minimally supervised approach. In Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages, Semitic ’05, pages 55–62, Ann Arbor, Michigan. Alexander Erdmann, Nasser Zalmout, and Nizar Habash. 2018. Addressing noise in multidialectal word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 558– 565, Melbourne, Australia. Association for Computational Linguistics. Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection generation using character sequence to sequence learning. In Proceedings of NAACL-HLT, pages 634–643. Lisheng Fu, Thien Huu Nguyen, Bonan Min, and Ralph Grishman. 2017. Domain adaptation for relation extraction with domain adversarial neural network. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 425–429. Asian Federation of Natural Language Processing. Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In Proceedings of the 32nd International Conference on International Conference on Machine LearningVolume 37, pages 1180–1189. JMLR. org. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. 
Domain-adversarial training of neural networks. J. Mach. Learn. Res., 17(1):2096–2030. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc. David Graff, Mohamed Maamouri, Basma Bouziri, Sondos Krouna, Seth Kulick, and Tim Buckwalter. 2009. Standard Arabic Morphological Analyzer (SAMA) Version 3.1. Linguistic Data Consortium LDC2009E73. Tao Gui, Qi Zhang, Haoran Huang, Minlong Peng, and Xuanjing Huang. 2017. Part-of-speech tagging for 1785 twitter with adversarial neural networks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2411– 2420. Association for Computational Linguistics. Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual dependency parsing based on distributed representations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1234–1244. Nizar Habash, Fadhl Eryani, Salam Khalifa, Owen Rambow, Dana Abdulrahim, Alexander Erdmann, Reem Faraj, Wajdi Zaghouani, Houda Bouamor, Nasser Zalmout, Sara Hassan, Faisal Al-Shargi, Sakhar Alkhereyf, Basma Abdulkareem, Ramy Eskander, Mohammad Salameh, and Hind Saddiki. 2018. Unified guidelines and resources for Arabic dialect orthography. In Proceedings of the 11th Language Resources and Evaluation Conference, Miyazaki, Japan. European Language Resource Association. Nizar Habash, Ramy Eskander, and Abdelati Hawwari. 2012. A Morphological Analyzer for Egyptian Arabic. In Proceedings of the Twelfth Meeting of the Special Interest Group on Computational Morphology and Phonology, pages 1–9, Montréal, Canada. Nizar Habash and Owen Rambow. 2005. Arabic Tokenization, Part-of-Speech Tagging and Morphological Disambiguation in One Fell Swoop. In Proceedings of the 43rd Annual Meeting of the ACL, pages 573–580, Ann Arbor, Michigan. Nizar Habash, Ryan Roth, Owen Rambow, Ramy Eskander, and Nadi Tomeh. 2013. Morphological Analysis and Disambiguation for Dialectal Arabic. In Proceedings of NAACL-HLT, pages 426–432, Atlanta, Georgia. Nizar Y Habash. 2010. Introduction to Arabic natural language processing, volume 3. Morgan & Claypool Publishers. Kazuma Hashimoto, Yoshimasa Tsuruoka, Richard Socher, et al. 2017. A joint many-task model: Growing a neural network for multiple nlp tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1923– 1933. Go Inoue, Hiroyuki Shindo, and Yuji Matsumoto. 2017. Joint prediction of morphosyntactic categories for fine-grained Arabic part-of-speech tagging exploiting tag dictionary information. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 421–431. Katharina Kann, Ryan Cotterell, and Hinrich Schütze. 2017. One-shot neural cross-lingual transfer for paradigm completion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1993–2003. Salam Khalifa, Nasser Zalmout, and Nizar Habash. 2016. YAMAMA: Yet Another Multi-Dialect Arabic Morphological Analyzer. 
In Proceedings of the International Conference on Computational Linguistics (COLING): System Demonstrations, pages 223–227. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Mohamed Maamouri, Ann Bies, Tim Buckwalter, and Wigdan Mekki. 2004. The Penn Arabic Treebank: Building a Large-Scale Annotated Arabic Corpus. In NEMLAR Conference on Arabic Language Resources and Tools, pages 102–109, Cairo, Egypt. Mohamed Maamouri, Sondos Krouna, Dalila Tabessi, Nadia Hamrouni, and Nizar Habash. 2012. Egyptian Arabic Morphological Annotation Guidelines. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Robert Parker, David Graff, Ke Chen, Junbo Kong, and Kazuaki Maeda. 2011. Arabic Gigaword Fifth Edition. LDC catalog number No. LDC2011T11, ISBN 1-58563-595-2. Arfath Pasha, Mohamed Al-Badrashiny, Ahmed El Kholy, Ramy Eskander, Mona Diab, Nizar Habash, Manoj Pooleery, Owen Rambow, and Ryan Roth. 2014. MADAMIRA: A Fast, Comprehensive Tool for Morphological Analysis and Disambiguation of Arabic. In In Proceedings of LREC, Reykjavik, Iceland. Wael Salloum and Nizar Habash. 2014. ADAM: Analyzer for Dialectal Arabic Morphology. Journal of King Saud University-Computer and Information Sciences, 26(4):372–378. Qinlan Shen, Daniel Clothiaux, Emily Tagtow, Patrick Littell, and Chris Dyer. 2016. The role of context in neural morphological disambiguation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 181–191, Osaka, Japan. The COLING 2016 Organizing Committee. Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 231– 235, Berlin, Germany. Association for Computational Linguistics. Jennifer Tracey, Haejoong Lee, Stephanie Strassel, and Safa Ismael. 2018. BOLT Arabic Discussion Forum Source Data. LDC catalog number LDC2018T10. Xiaozhi Wang, Xu Han, Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2018. Adversarial multi-lingual neural relation extraction. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1156–1166. Association for Computational Linguistics. 1786 Daniel Watson, Nasser Zalmout, and Nizar Habash. 2018. Utilizing character and word embeddings for text normalization with sequence-to-sequence models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 837–843. Nasser Zalmout, Alexander Erdmann, and Nizar Habash. 2018. Noise-Robust Morphological Disambiguation for Dialectal Arabic. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Nasser Zalmout and Nizar Habash. 2017. Don’t Throw Those Morphological Analyzers Away Just Yet: Neural Morphological Disambiguation for Arabic. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 704–713, Copenhagen, Denmark. Association for Computational Linguistics. Rabih Zbib, Erika Malchiodi, Jacob Devlin, David Stallard, Spyros Matsoukas, Richard Schwartz, John Makhoul, Omar F Zaidan, and Chris CallisonBurch. 2012. Machine translation of Arabic dialects. 
In Proceedings of the 2012 conference of the North American chapter of the association for computational linguistics: Human language technologies, pages 49–59. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1787–1799 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1787 Neural Machine Translation with Reordering Embeddings Kehai Chen, Rui Wang∗, Masao Utiyama, and Eiichiro Sumita National Institute of Information and Communications Technology (NICT), Kyoto, Japan {khchen, wangrui, mutiyama, eiichiro.sumita}@nict.go.jp Abstract The reordering model plays an important role in phrase-based statistical machine translation. However, there are few works that exploit the reordering information in neural machine translation. In this paper, we propose a reordering mechanism to learn the reordering embedding of a word based on its contextual information. These reordering embeddings are stacked together with self-attention networks to learn sentence representation for machine translation. The reordering mechanism can be easily integrated into both the encoder and the decoder in the Transformer translation system. Experimental results on WMT’14 English-toGerman, NIST Chinese-to-English, and WAT ASPEC Japanese-to-English translation tasks demonstrate that the proposed methods can significantly improve the performance of the Transformer translation system. 1 Introduction The reordering model plays an important role in phrase-based statistical machine translation (PBSMT), especially for translation between distant language pairs with large differences in word order, such as Chinese-to-English and Japaneseto-English translations (Galley and Manning, 2008; Goto et al., 2013). Typically, the traditional PBSMT learns large-scale reordering rules from parallel bilingual sentence pairs in advance to form a reordering model. This reordering model is then integrated into the translation decoding process to ensure a reasonable order of translations of the source words (Chiang, 2005; Xiong et al., 2006; Galley and Manning, 2008). In contrast to the explicit reordering model for PBSMT, the RNN-based NMT (Sutskever et al., 2014; Bahdanau et al., 2015) depends on neural networks to implicitly encode order dependencies ∗Corresponding author between words in a sentence to generate a fluent translation. Inspired by a distortion method originating in SMT (Brown et al., 1993; Koehn et al., 2003; Al-Onaizan and Papineni, 2006), there is a quite recent preliminary exploration work for NMT (Zhang et al., 2017). They distorted the existing content-based attention by an additional position-based attention inside the fixed-size window, and reported a considerable improvement on the classical RNN-based NMT. This means that the word reordering information is also beneficial to the NMT. The Transformer (Vaswani et al., 2017) translation system relies on self-attention networks (SANs), and has attracted growing interesting in the machine translation community. The Transformer generates an ordered sequence of positional embeddings by a positional encoding mechanism (Gehring et al., 2017a) to explicitly encode the order of dependencies between words in a sentence. The Transformer is adept at parallelizing of performing (multi-head) and stacking (multi-layer) SANs to learn the sentence representation to predict translation, and has delivered state-of-the-art performance on various translation tasks (Bojar et al., 2018; Marie et al., 2018). 
However, these positional embeddings focus on sequentially encoding order relations between words, and does not explicitly consider reordering information in a sentence, which may degrade the performance of Transformer translation systems. Thus, the reordering problem in NMT has not been studied extensively, especially in Transformer. In this paper, we propose a reordering mechanism for the Transformer translation system. We dynamically penalize the given positional embedding of a word depending on its contextual information, thus generating a reordering embedding for each word. The reordering mechanism 1788 is then stacked together with the existing SANs to learn the final sentence representation with word reordering information. The proposed method can be easily integrated into both the encoder and the decoder in the Transformer. Experimental results on the WMT14 Englishto-German, NIST Chinese-to-English, and WAT ASPEC Japanese-to-English translation tasks verify the effectiveness and universality of the proposed approach. This paper primarily makes the following contributions: • We propose a reordering mechanism to learn the reordering embedding of a word based on its contextual information, and thus these learned reordering embeddings are added to the sentence representation for archiving reordering of words. To the best of our knowledge, this is the first work to introduce the reordering information to the Transformer translation system. • The proposed reordering mechanism can be easily integrated into the Transformer to learn reordering-aware sentence representation for machine translation. The proposed translation models outperform the state-of-the-art NMT baselines systems with a similar number of parameters and achieve comparable results compared to NMT systems with much more parameters. 2 Related Work 2.1 Reordering Model for PBSMT In PBSMT, there has been a substantial amount of research works about reordering model, which was used as a key component to ensure the generation of fluent target translation. Bisazza and Federico (2016) divided these reordering models into four groups: Phrase orientation models (Tillman, 2004; Collins et al., 2005; Nagata et al., 2006; Zens and Ney, 2006; Galley and Manning, 2008; Cherry, 2013), simply known as lexicalized reordering models, predict whether the next translated source span should be placed on the right (monotone), the left (swap), or anywhere else (discontinuous) of the last translated one. Jump models (Al-Onaizan and Papineni, 2006; Green et al., 2010) predict the direction and length of the jump that is performed between consecutively translated words or phrases, with the goal of better handling long-range reordering. Source decoding sequence models (Feng et al., 2010, 2013) address this issue by directly modeling the reordered sequence of input words, as opposed to the reordering operations that generated it. Operation sequence models are n-gram models that include lexical translation operations and reordering operations in a single generative story, thereby combining elements from the previous three model families (Durrani et al., 2011, 2013, 2014). Their method were further extended by source syntax information (Chen et al., 2017c, 2018b) to improve the performance of SMT. Moreover, to address data sparsity (Guta et al., 2015) caused by a mass of reordering rules, Li et al. (2013, 2014) modeled ITG-based reordering rules in the translation by using neural networks. 
In particular, the NN-based reordering models can not only capture semantic similarity but also ITG reordering constraints (Wu, 1996, 1997) in the translation context. This neural network modeling method is further applied to capture reordering information and syntactic coherence. 2.2 Modeling Ordering for NMT The attention-based NMT focused on neural networks themselves to implicitly capture order dependencies between words (Sutskever et al., 2014; Bahdanau et al., 2015; Wang et al., 2017a,b, 2018; Zhang et al., 2018). Coverage model can partially model the word order information (Tu et al., 2016; Mi et al., 2016). Inspired by a distortion method (Brown et al., 1993; Koehn et al., 2003; Al-Onaizan and Papineni, 2006) originated from SMT, Zhang et al. (2017) proposed an additional position-based attention to enable the existing content-based attention to attend to the source words regarding both semantic requirement and the word reordering penalty. Pre-reordering, a pre-processing to make the source-side word orders close to those of the target side, has been proven very helpful for the SMT in improving translation quality. Moreover, neural networks were used to pre-reorder the sourceside word orders close to those of the target side (Du and Way, 2017; Zhao et al., 2018b; Kawara et al., 2018), and thus were input to the existing RNN-based NMT for improving the performance of translations. Du and Way (2017) 1789 and Kawara et al. (2018) reported that the prereordering method had an negative impact on the NMT for the ASPEC JA-EN translation task. In particular, Kawara et al. (2018) assumed that one reason is the isolation between pre-ordering and NMT models, where both models are trained using independent optimization functions. In addition, several research works have been proposed to explicitly introduce syntax structure into the RNN-based NMT for encoding syntax ordering dependencies into sentence representations (Eriguchi et al., 2016; Li et al., 2017; Chen et al., 2017a,b; Wang et al., 2017b; Chen et al., 2018a). Recently, the neural Transformer translation system (Vaswani et al., 2017), which relies solely on self-attention networks, used a fixed order sequence of positional embeddings to encode order dependencies between words in a sentence. 3 Background 3.1 Positional Encoding Mechanism Transformer (Vaswani et al., 2017) typically uses a positional encoding mechanism to encode order dependencies between words in a sentence. Formally, given a embedding sequence of source sentence of length J, X={x1, · · · , xJ}, the positional embedding is computed based on the position of each word by Eq.(1): pe(j,2i) = sin(j/100002i/dmodel), pe(j,2i+1) = cos(j/100002i/dmodel), (1) where j is the word’s position index in the sentence and i is the number of dimensions of the position index. As a result, there is a sequence of positional embeddings: PE = {pe1, · · · , peJ}. (2) Each pej is then added to the corresponding word embedding xj as an combined embedding vj: vj = xj + pej. (3) Finally, a sequence of embeddings {v1, · · · , vJ} is the initialized sentence representation H0. Later, H0 will be input to the self-attention layer to learn the sentence representation. 3.2 Self-Attention Mechanism Following the positional embedding layer, selfattention mechanism is used to learn sentence representation over the H0 obtained in the previous section. Generally, the self-attention mechanism is a stack of N identical layers in the Transformer architecture. 
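Before describing the layers of this stack in detail, the sinusoidal positional encoding of Eqs. (1)-(3) can be made concrete with a short sketch. The following NumPy snippet is our own illustration rather than the authors' code; the word embedding values are random placeholders, and the sentence length and dimensionality are arbitrary toy choices.

```python
import numpy as np

def positional_embeddings(max_len, d_model):
    """Sinusoidal positional embeddings of Eq. (1): even dimensions use sin,
    odd dimensions use cos. Assumes an even d_model."""
    pe = np.zeros((max_len, d_model))
    pos = np.arange(max_len)[:, None]          # word position j
    i = np.arange(0, d_model, 2)[None, :]      # dimension index 2i
    angles = pos / np.power(10000.0, i / d_model)
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# Eq. (3): v_j = x_j + pe_j, giving the initial sentence representation H^0
J, d_model = 6, 512
X = np.random.randn(J, d_model)                # toy word embeddings x_1..x_J
H0 = X + positional_embeddings(J, d_model)
```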
Each identical layer consists of two sub-layers: a self-attention network and a position-wise fully connected feed-forward network. A residual connection (He et al., 2016) is employed around each of the two sub-layers, followed by layer normalization (Ba et al., 2016). Formally, the stack that learns the final sentence representation is organized as follows:

[ H̄^n = LN(SelfAtt^n(H^{n-1}) + H^{n-1}),
  H^n = LN(FFN^n(H̄^n) + H̄^n) ]_N,    (4)

where SelfAtt^n(·), LN(·), and FFN^n(·) denote the self-attention network, layer normalization, and feed-forward network of the n-th identical layer, respectively, and [· · ·]_N denotes the stack of N identical layers. In the encoder and decoder of the Transformer, SelfAtt^n(·) computes attention over the output H^{n-1} of the (n-1)-th layer:

SelfAtt^n(H^{n-1}) = softmax(QK^T / sqrt(d_k)) V,    (5)

where {Q, K, V} are the query, key, and value vectors transformed from the input representation H^{n-1}, and d_k is the dimension of the query and key vectors. As a result, the output of the N-th layer, H^N, is the final sentence representation for machine translation.

4 Reordering Mechanism

Intuitively, when a human translates a sentence, he or she often adjusts word orders based on the global meaning of the original sentence or its context, thus obtaining a synonymous sentence that is easier to understand and translate. It is thus clear that the reordering of a given word relies heavily on the global or contextual meaning of the sentence. Motivated by this, we use each word together with the global contextual information of the sentence to learn a Reordering Embedding for that word (as shown in Figure 1), thus modeling the above human reordering process. The reordering mechanism is then stacked with the SAN layer to learn a reordering-aware sentence representation.

Figure 1: Learning reordering embeddings for the n-th layer in the stack.

4.1 Reordering Embeddings

To capture reordering information, we first learn a positional penalty vector based on the given word and the global context of the sentence. The positional penalty vector is then used to penalize the given positional embedding of the word to generate a new, reordering embedding. Finally, these reordering embeddings are added to the intermediate sentence representation to achieve the reordering of words. We divide the process into the following three steps:

Positional Penalty Vectors: The self-attention mechanism focuses on global dependencies between words to learn an intermediate sentence representation H̄^n, which is regarded as the expected global context of the sentence as reordered by a human translator. Therefore, given a sentence of J words, we use the output H^{n-1} of the previous layer in the stack together with the new intermediate global context representation H̄^n to learn positional penalty vectors PP^n for the n-th layer of the stack [· · ·]_N:

PP^n = sigmoid(V̄^n · tanh(W^n · H^{n-1} + W̄^n · H̄^n)),    (6)

where W^n ∈ R^{d_model×d_model}, W̄^n ∈ R^{d_model×d_model}, and V̄^n ∈ R^{d_model×d_model} are model parameters and d_model is the dimension of the model.
Each element of PP^n ∈ R^{J×d_model} is a real value between zero and one.

Figure 2: The architecture of the Transformer with reordering embeddings.

Reordering Embeddings: PP^n is used to penalize the original positional embeddings PE:

RE^n = PE · PP^n,    (7)

where RE^n is called a reordering embedding (RE) because each element of PE is multiplied by a probability between zero and one.

Achieving Reordering: The learned RE^n is further added to the current sentence hidden state H̄^n to achieve the reordering operation:

C^n = LN(H̄^n + RE^n),    (8)

where LN is layer normalization. As a result, there is a reordering-aware sentence hidden state representation C^n.

4.2 Stacking SANs with Reordering Embeddings

The original positional embeddings of a sentence allow the Transformer to avoid having to recurrently capture the order dependencies between words, relying entirely on the stacked SANs to learn sentence representations in parallel. The learned REs are similar to the original positional embeddings. This means that these learned reordering embeddings can also be easily stacked together with the existing SANs to learn the final reordering-aware sentence representation for machine translation. Following Eq. (4), stacking SANs with reordering embeddings is formalized as Eq. (9):

[ H̄^n = LN(SelfAtt^n(H^{n-1}) + H^{n-1}),
  PP^n = sigmoid(V̄^n · tanh(W^n · H^{n-1} + W̄^n · H̄^n)),
  C^n = LN(H̄^n + PE · PP^n),
  H^n = LN(FFN^n(C^n) + H̄^n) ]_N,    (9)

where H^0 is the initialized sentence representation as in Section 3.1. Finally, there is a reordering-aware sentence representation H^N for predicting translations.

5 Neural Machine Translation with Reordering Mechanism

Based on the proposed approach to learning sentence representations, we design three Transformer translation models: Encoder REs, Decoder REs, and Both REs, all of which use reordering knowledge to improve the translation performance of the Transformer.

Encoder REs: The proposed reordering mechanism is only applied to the encoder of the Transformer to learn the representation of the source sentence, as shown in the Encoder of Figure 2.

Decoder REs: Similarly, the proposed reordering mechanism is only introduced into the SAN layer of the Transformer related to the representation of the target sentence, as shown in the Decoder of Figure 2.

Both REs: To further enhance translation performance, we simultaneously apply the proposed method to the source and target sentences to learn their sentence representations, as shown in Figure 2.

Note that the reordering model in PBSMT is an independent model and therefore needs to consider information concerning both the source and the target. In NMT, the reordering embedding is jointly trained with the entire NMT model. Although it is only applied to the encoder (or decoder), it can still obtain information about the target (or source) from the decoder (or encoder) through neural network feedback. Therefore, the proposed reordering mechanism makes use of information concerning both the source and the target.
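Before turning to the experiments, Eqs. (6)-(8) can be made concrete with a small sketch. The PyTorch module below is our own illustrative re-implementation, not the authors' released code; the parameter names W, W_bar, and V mirror W^n, W̄^n, and V̄^n, and the toy tensors stand in for a sentence of J words with hidden size d_model.

```python
import torch
import torch.nn as nn

class ReorderingEmbedding(nn.Module):
    """Sketch of Eqs. (6)-(8): learn positional penalty vectors PP^n from
    H^{n-1} and the intermediate state H̄^n, use them to scale the original
    positional embeddings PE, and add the result back to H̄^n."""

    def __init__(self, d_model):
        super().__init__()
        self.W = nn.Linear(d_model, d_model, bias=False)      # W^n
        self.W_bar = nn.Linear(d_model, d_model, bias=False)  # W̄^n
        self.V = nn.Linear(d_model, d_model, bias=False)      # V̄^n
        self.norm = nn.LayerNorm(d_model)

    def forward(self, h_prev, h_bar, pe):
        # Eq. (6): each element of PP^n lies in (0, 1)
        pp = torch.sigmoid(self.V(torch.tanh(self.W(h_prev) + self.W_bar(h_bar))))
        re = pe * pp                    # Eq. (7): RE^n = PE · PP^n
        return self.norm(h_bar + re)    # Eq. (8): C^n = LN(H̄^n + RE^n)

# Toy usage for one layer of the stack
J, d_model = 6, 512
layer = ReorderingEmbedding(d_model)
h_prev = torch.randn(J, d_model)    # H^{n-1}, output of the previous layer
h_bar = torch.randn(J, d_model)     # H̄^n from the self-attention sub-layer
pe = torch.randn(J, d_model)        # original positional embeddings (Section 3.1)
c_n = layer(h_prev, h_bar, pe)      # reordering-aware state, fed to FFN^n in Eq. (9)
```

In the full stack of Eq. (9), C^n then replaces H̄^n as the input to the position-wise feed-forward sub-layer of that layer.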
6 Experiments 6.1 Datasets The proposed method was evaluated on three tasks from the WMT14 English-to-German (EN-DE), NIST Chinese-to-English (ZH-EN), and WAT ASPEC Japanese-to-English (JA-EN) benchmarks. 1) For the EN-DE translation task, 4.43 million bilingual sentence pairs of the WMT14 dataset were used as training data, including Common Crawl, News Commentary, and Europarl v7. The newstest2013 and newstest2014 datasets were used as the dev set and test set, respectively. 2) For the ZH-EN translation task, the training dataset consisted of 1.28 million bilingual sentence pairs from LDC corpus consisting of LDC2002E18, LDC2003E07, LDC2003E14, and Hansard’s portions of LDC2004T07, LDC2004T08, and LDC2005T06. The MT06 and the MT02/MT03/MT04/MT05/MT08 datasets were used as the dev set and test set, respectively. 3) For the JA-EN translation task, the training dataset consisted of two million bilingual sentence pairs from the ASPEC corpus (Nakazawa et al., 2016). The dev set consisted of 1,790 sentence pairs and the test set of 1,812 sentence pairs. 6.2 Baseline Systems These baseline systems included: Transformer: a vanilla Transformer with absolute positional embedding (Vaswani et al., 2017), for example Transformer (base) and Transformer (big) models. Relative PE (Shaw et al., 2018): incorporates relative positional embeddings into the selfattention mechanism of Transformer. Additional PE (control experiment): uses original absolute positional embeddings to enhance the position information of each SAN layer instead of the proposed reordering embeddings. Pre-reordering: a pre-ordering method (Goto et al., 2013) for JA-EN translation task was used to adjust the order of Japanese words in both the training, dev, and test datasets, and thus reordered each source sentence into the similar order as its target sentence. 6.3 System Setting For all models (base), the byte pair encoding algorithm (Sennrich et al., 2016) was adopted and the size of the vocabulary was set to 32,000. The number of dimensions of all input and output 1792 System Architecture newstest2014 #Speed1 #Speed2 #Params Existing NMT systems Wu et al. (2016) GNMT 26.3 N/A N/A N/A Gehring et al. (2017b) CONVS2S 26.36 N/A N/A N/A Vaswani et al. (2017) Transformer (base) 27.3 N/A N/A 65.0M Vaswani et al. (2017) Transformer (big) 28.4 N/A N/A 213.0M Our NMT systems this work Transformer (base) 27.24 9910 181 97.6M +Additional PEs 27.10 9202 179 97.6M +Relative PEs 27.63 4418 146 97.6M +Encoder REs 28.03++ 8816 179 102.1M +Decoder REs 27.61+ 9101 175 102.1M +Both REs 28.22++ 8605 174 106.8M Transformer (big) 28.34 4345 154 272.8M +Both REs 29.11++ 3434 146 308.2M Table 1: Comparison with existing NMT systems on WMT14 EN-DE Translation Task. “#Speed1” and “#Speed2” denote the training and decoding speed measured in source tokens per second, respectively. In Table 1, 2 and 3, “++/+” after score indicate that the proposed method was significantly better than the corresponding baseline Transformer (base or big) at significance level p<0.01/0.05. layers was set to 512, and that of the inner feedforward neural network layer was set to 2048. The heads of all multi-head modules were set to eight in both encoder and decoder layers. In each training batch, a set of sentence pairs contained approximately 4096×4 source tokens and 4096×4 target tokens. During training, the value of label smoothing was set to 0.1, and the attention dropout and residual dropout were p = 0.1. 
The Adam optimizer (Kingma and Ba, 2014) was used to tune the parameters of the model. The learning rate was varied under a warm-up strategy with warmup steps of 8,000. For evaluation, we validated the model with an interval of 1,000 batches on the dev set. Following the training of 200,000 batches, the model with the highest BLEU score of the dev set was selected to evaluate on the test sets. During the decoding, the beam size was set to four. All models were trained and evaluated on a single P100 GPU. SacreBELU (Post, 2018) was used as the evaluation metric of EN-DE, and the multi-bleu.perl1 was used the evaluation metric of ZH-EN and JA-EN tasks. The signtest (Collins et al., 2005) was as statistical significance test. We re-implemented all methods (“this work” in the tables) on the OpenNMT toolkit (Klein et al., 1https://github.com/mosessmt/mosesdecoder/tree/RELEASE-4.0/scripts/generic/multibleu.perl 2017). 6.4 Main Results To validate the effectiveness of our methods, the proposed models were first evaluated on the WMT14 EN-DE translation task as in the original Transformer translation system (Vaswani et al., 2017). The main results of the translation are shown in Tables 1. We made the following observations: 1) The baseline Transformer (base) in this work outperformed GNMT, CONVS2S, and Transformer (base)+Relative PEs, and achieved performance comparable to the original Transformer (base). This indicates that it is a strong baseline NMT system. 2) The three proposed models significantly outperformed the baseline Transformer (base). This indicates that the learned reordering embeddings were beneficial for the Transformer. Meanwhile, our models outperformed the comparison system +Additional PEs (control experiment), which means that these improvements in translation derived from the learned REs instead of the original PEs. +Encoder REs and +Both REs were superior to +Relative PEs, which means that the REs better captured reordering information than +Relative PEs. 3) Of the proposed models, +Encoder REs 1793 System Architecture Test Sets #Param MT02 MT03 MT04 MT05 MT08 Existing NMT systems Vaswani et al. (2017) Transformer N/A N/A N/A N/A N/A N/A Zhang et al. (2017) RNNsearch+Distortion N/A 38.33 40.40 36.81 N/A N/A Meng and Zhang (2018) DTMT#1 46.90 45.85 46.78 45.96 36.58 170.5M Meng and Zhang (2018) DTMT#4 47.03 46.34 47.52 46.70 37.61 208.4M Kong et al. (2018) RNN-based NMT N/A 38.62 41.98 37.42 N/A 87.9M Zhao et al. (2018a) RNN-based NMT+MEM N/A 44.98 45.51 43.95 33.33 N/A Our NMT systems this work Transformer (base) 46.45 45.33 45.82 45.57 35.57 78.3M +Additional PEs 46.66 45.35 46.11 45.40 35.75 78.3M +Relative PEs 46.41 45.94 46.54 46.21 36.14 78.3M +Encoder REs 47.47++ 45.87++ 46.82++ 46.58++ 36.42++ 83.0M +Decoder REs 46.80 45.43 46.23++ 46.11++ 36.02+ 83.0M +Both REs 47.54++ 46.56++ 47.27++ 46.88++ 36.77++ 87.6M Transformer (Big) 47.76 46.66 47.51 47.71 37.73 244.7M +Both REs 48.42++ 47.32++ 48.22++ 48.56++ 38.19+ 269.7M Table 2: Results on NIST ZH-EN Translation Task. performed slightly better than +Decoder REs. This indicates that the reordering information of the source sentence was slightly more useful than that of the target sentence. +Both REs which combined reordering information for both source and target further improved performance and were significantly better than +Encoder REs and +Decoder REs. This indicates that the reordering information of source and target can be used together to improve predicted translation. 
4) We also evaluated the best performing method (+Both REs) in the big Transformer model settings (Vaswani et al., 2017). Compared with Transformer (base), Transformer (big) contains approximately three times as many parameters and obtained an improvement of about one BLEU point. Transformer (big)+Both REs further achieved a 0.77 BLEU point improvement.

5) The proposed models contain approximately 5%∼10% additional parameters and decrease training speed by 10%∼15%, compared to the corresponding baselines. Transformer (base)+Both REs achieved results comparable to Transformer (big), which has far more parameters. This indicates that the improvement of the proposed methods does not come merely from adding parameters.

6) In Table 3, +Pre-Reordering performed worse than the baseline Transformer (base) for the WAT JA-EN translation task. We assume that the simple pre-reordering strategy has a negative impact on the translation performance of the NMT model, which is in line with similar findings in (Du and Way, 2017; Kawara et al., 2018). Conversely, the proposed methods performed better than Transformer (base), and especially better than +Pre-Reordering. This suggests that, because the pre-ordering operation is isolated from the NMT model, the generated pre-ordered data are not conducive to modeling source-side translation knowledge within the NMT framework.

Systems              test set (BLEU)   #Param
Transformer (base)        30.33         73.9M
+Pre-Reordering           28.93         73.9M
+Additional PEs           30.16         73.9M
+Relative PEs             30.42         73.9M
+Encoder REs              31.12++       78.6M
+Decoder REs              30.78+        78.6M
+Both REs                 31.41++       84.4M
Transformer (big)         31.21        234.6M
+Both REs                 31.93++      273.7M
Table 3: Results for WAT JA-EN Translation Task.

In addition, Tables 2 and 3 show that the proposed models yielded similar improvements over the baseline system and the compared methods on the NIST ZH-EN and WAT JA-EN translation tasks. These results indicate that our method can effectively improve the NIST ZH-EN and WAT JA-EN translation tasks. In other words, our approach is a universal method that can improve translation for other language pairs as well.

Figure 3: The effect of reordering in the test set where the word orders are partially wrong, for the EN-DE test set. "Percentage" denotes the percentage of swapped words in one source sentence.

Figure 4: The effect of reordering in the test set where the word orders are partially wrong, for the JA-EN test set.

6.5 Effect of Reordering Embeddings

Unlike the reordering model in PBSMT, which can be illustrated explicitly, it is challenging to explicitly show the effect of reordering embeddings. To further analyze this effect, we simulated a scenario where the word order of a sentence was partially incorrect and reordering was needed for NMT. We randomly swapped words of each source sentence in the test set according to different percentages of incorrectly swapped words in a sentence. For example, "10%" indicates that 10% of the words in each source sentence of the test set were randomly swapped. We evaluated Transformer (base) and +Both REs (base) on these test sets for the three translation tasks, and the results are shown in Figures 3, 4, and 5.
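The perturbation used to build these test sets can be sketched in a few lines of Python. The paper does not spell out the exact swapping procedure, so the snippet below is only one plausible reading, in which a given percentage of token positions per sentence is displaced by random pairwise swaps; the function name, seed, and example sentence are our own choices.

```python
import random

def swap_words(tokens, percentage, seed=0):
    """Return a copy of `tokens` in which roughly `percentage` percent of the
    positions are out of order (each swap displaces two tokens).
    Illustrative only; the paper does not specify the exact procedure."""
    rng = random.Random(seed)
    tokens = list(tokens)
    n_swaps = int(round(len(tokens) * percentage / 100.0 / 2))
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

sent = "the proposed reordering mechanism is stacked with the SAN layer".split()
print(" ".join(swap_words(sent, 30)))   # e.g. a "30%" test sentence
```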
1) We observed that when the ratio of swapped words gradually increased, the performances 0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% 10 20 30 40 50 Percentage BLEU Transformer (base) +Both REs Figure 5: The effect of reordering in the test set where the word orders are partially wrong for test set of ZHEN. of Transformer (base) and +Both REs (base) significantly degraded. This indicates that correct ordering information has an important effect on the Transformer system. 2) When the percentage of swapped words was less than 40%, the NMT systems still delivered reasonable performance. The gap between +Both REs (base) and Transformer (base) was approximately 2-3 BLEU scores. This indicates that +Both REs (base) dealt better than the vanilla baseline with this scenario. In other words, the learned REs retrained part of reordering information in a sentence. 3) When the percentage of swapped words was greater than 40%, Transformer (base) and +Both REs (base) yielded poor performance on translation. We infer that excessive exchanges of word order may increase the ambiguity of the source sentence such that Transformer (base) and +Both REs (base) struggled to convert the original meaning of the source sentence into the target translation. 6.6 Cases Analysis Figure 6 shows two translation examples, which were generated by Transformer (base) model and +Both REs (base) model, respectively. For the first sample, +Both REs (base) translated the Chinese phrase “继续[continue] 改 革[reform] 的[to] 努力[efforts]” into the “the efforts to continue the reform” while Transformer 1795 Ref1: the efforts to continue reform will enhance the economic recovery Src1: 继续 改革 的 努力 将 促成 经济 复苏 [continue] [reform] [to] [efforts] [will] [enhance] [economic] [recovery] Transformer (base): continued reform efforts will bring about economic recovery +Both_REs (base): the efforts to continue the reform will promote economic recovery Ref2: nine people were killed in the incident Src2: 这 起 事件 造成 九 人 丧生 [the] [ ] [incident] [ ] [nine] [people] [killed] Transformer (base): the incident killed nine people +Both_REs (base): nine people were killed in the incident Figure 6: Two translation examples for ZH-EN task. In each example, the English phrases in color indicate they are translations from the corresponding Chinese phrase with the same color. (base) translated the Chinese phrase into “continued reform efforts”. Although both of them covered the meanings of main words, the order of the former translation is closer to the natural English word order. For the second sample, Transformer (base) generated a puzzling translation “the incident killed nine people”. It seems to be an English sentence in Chinese word order. In comparison, the +Both REs (base) translated it into “nine people were killed in the incident” which is the same as the reference. These two examples show that the proposed model with reordering embeddings was conducive to generating a translation in line with the target language word order. 7 Conclusion and Future Work Word ordering is an important issue in translation. However, it has not been extensively studied in NMT. In this paper, we proposed a reordering mechanism to capture knowledge of reordering. A reordering embedding was learned by considering the relationship between the positional embedding of a word and that of the entire sentence. The proposed reordering embedding can be easily introduced to the existing Transformer translation system to predict translations. 
Experiments showed that our method can significantly improve the performance of Transformer. In future work, we will further explore the effectiveness of the reordering mechanism and apply it to other natural language processing tasks, such dependency parsing (Zhang et al., 2016; Li et al., 2018), and semantic role labeling (He et al., 2018; Li et al., 2019). Acknowledgments We are grateful to the anonymous reviewers and the area chair for their insightful comments and suggestions. This work was partially conducted under the program “Promotion of Global Communications Plan: Research, Development, and Social Demonstration of Multilingual Speech Translation Technology” of the Ministry of Internal Affairs and Communications (MIC), Japan. Rui Wang was partially supported by JSPS grant-in-aid for early-career scientists (19K20354): “Unsupervised Neural Machine Translation in Universal Scenarios” and NICT tenure-track researcher startup fund “Toward Intelligent Machine Translation”. References Yaser Al-Onaizan and Kishore Papineni. 2006. Distortion models for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 529–536, Sydney, Australia. Association for Computational Linguistics. Lei Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. CoRR, abs/1607.06450. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA. Arianna Bisazza and Marcello Federico. 2016. A survey of word reordering in statistical machine translation: Computational models and language phenomena. Computational Linguistics, 42(2):163– 205. 1796 Ondrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (wmt18). In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, pages 272–307, Belgium, Brussels. Association for Computational Linguistics. Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. Huadong Chen, Shujian Huang, David Chiang, and Jiajun Chen. 2017a. Improved neural machine translation with a syntax-aware encoder and decoder. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1936–1945, Vancouver, Canada. Association for Computational Linguistics. Kehai Chen, Rui Wang, Masao Utiyama, Lemao Liu, Akihiro Tamura, Eiichiro Sumita, and Tiejun Zhao. 2017b. Neural machine translation with source dependency representation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2846– 2852, Copenhagen, Denmark. Association for Computational Linguistics. Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, and Tiejun Zhao. 2018a. Syntax-directed attention for neural machine translation. In AAAI Conference on Artificial Intelligence, pages 4792– 4798, New Orleans, Lousiana, USA. Kehai Chen, Tiejun Zhao, Muyun Yang, and Lemao Liu. 2017c. Translation prediction with source dependency-based context representation. 
In AAAI Conference on Artificial Intelligence, pages 3166– 3172, San Francisco, California, USA. Kehai Chen, Tiejun Zhao, Muyun Yang, Lemao Liu, Akihiro Tamura, Rui Wang, Maosao Utiyama, and Eiichro Sumita. 2018b. A neural approach to source dependence based context model for statistical machine translation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(2):266–280. Colin Cherry. 2013. Improved reordering for phrase-based translation using sparse features. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 22–31, Atlanta, Georgia. Association for Computational Linguistics. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 263–270, Ann Arbor, Michigan. Association for Computational Linguistics. Michael Collins, Philipp Koehn, and Ivona Kucerova. 2005. Clause restructuring for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 531–540, Ann Arbor, Michigan. Association for Computational Linguistics. Jinhua Du and Andy Way. 2017. Pre-Reordering for Neural Machine Translation: Helpful or Harmful? The Prague Bulletin of Mathematical Linguistics, 108:171–182. Nadir Durrani, Alexander Fraser, Helmut Schmid, Hieu Hoang, and Philipp Koehn. 2013. Can markov models over minimal translation units help phrasebased smt? In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 399– 405, Sofia, Bulgaria. Association for Computational Linguistics. Nadir Durrani, Philipp Koehn, Helmut Schmid, and Alexander Fraser. 2014. Investigating the usefulness of generalized word representations in smt. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 421–432, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. Nadir Durrani, Helmut Schmid, and Alexander Fraser. 2011. A joint sequence translation model with integrated reordering. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1045–1054, Portland, Oregon, USA. Association for Computational Linguistics. Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-sequence attentional neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 823–833, Berlin, Germany. Association for Computational Linguistics. Minwei Feng, Arne Mauser, and Hermann Ney. 2010. A source-side decoding sequence model for statistical machine translation. In The Ninth Conference of the Association for Machine Translation in the Americas, Denver, Colorado. Minwei Feng, Jan-Thorsten Peter, and Hermann Ney. 2013. Advancements in reordering models for statistical machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 322–332, Sofia, Bulgaria. Association for Computational Linguistics. Michel Galley and Christopher D. Manning. 2008. A simple and effective hierarchical phrase reordering model. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 848–856, Honolulu, Hawaii. 
Association for Computational Linguistics. 1797 Jonas Gehring, Michael Auli, David Grangier, and Yann Dauphin. 2017a. A convolutional encoder model for neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 123–135, Vancouver, Canada. Association for Computational Linguistics. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017b. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1243–1252, International Convention Centre, Sydney, Australia. PMLR. Isao Goto, Masao Utiyama, and Eiichiro Sumita. 2013. Post-ordering by parsing with itg for japanese-english statistical machine translation. ACM Transactions on Asian Language Information Processing, 12(4):17:1–17:22. Spence Green, Michel Galley, and Christopher D. Manning. 2010. Improved models of distortion cost for statistical machine translation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 867–875, Los Angeles, California. Association for Computational Linguistics. Andreas Guta, Tamer Alkhouli, Jan-Thorsten Peter, Joern Wuebker, and Hermann Ney. 2015. A comparison between count and neural network models based on joint translation and reordering sequences. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1401–1411, Lisbon, Portugal. Association for Computational Linguistics. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770– 778. Shexia He, Zuchao Li, Hai Zhao, and Hongxiao Bai. 2018. Syntax for semantic role labeling, to be, or not to be. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2061–2071, Melbourne, Australia. Yuki Kawara, Chenhui Chu, and Yuki Arase. 2018. Recursive neural network based preordering for english-to-japanese machine translation. In Proceedings of ACL 2018, Student Research Workshop, pages 21–27, Melbourne, Australia. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67–72, Vancouver, Canada. Association for Computational Linguistics. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, Ednmonton, Canada. Xiang Kong, Zhaopeng Tu, Shuming Shi, Eduard H. Hovy, and Tong Zhang. 2018. Neural machine translation with adequacy-oriented learning. CoRR, abs/1811.08541. Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, Min Zhang, and Guodong Zhou. 2017. Modeling source syntax for neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 688–697, Vancouver, Canada. Association for Computational Linguistics. Peng Li, Yang Liu, and Maosong Sun. 2013. 
Recursive autoencoders for itg-based translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 567–577, Seattle, Washington, USA. Association for Computational Linguistics. Peng Li, Yang Liu, Maosong Sun, Tatsuya Izuha, and Dakun Zhang. 2014. A neural reordering model for phrase-based translation. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1897–1907, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. Zuchao Li, Jiaxun Cai, Shexia He, and Hai Zhao. 2018. Seq2seq dependency parsing. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3203–3214, Santa Fe, New Mexico, USA. Zuchao Li, Shexia He, Hai Zhao, Yiqing Zhang, Zhuosheng Zhang, Xi Zhou, and Xiang Zhou. 2019. Dependency or span, end-to-end uniform semantic role labeling. CoRR, abs/1901.05280. Benjamin Marie, Rui Wang, Atsushi Fujita, Masao Utiyama, and Eiichiro Sumita. 2018. Nict’s neural and statistical machine translation systems for the wmt18 news translation task. In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, pages 453–459, Belgium, Brussels. Association for Computational Linguistics. Fandong Meng and Jinchao Zhang. 2018. DTMT: A novel deep transition architecture for neural machine translation. CoRR, abs/1812.07807. 1798 Haitao Mi, Baskaran Sankaran, Zhiguo Wang, and Abe Ittycheriah. 2016. Coverage embedding models for neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 955–960, Austin, Texas. Association for Computational Linguistics. Masaaki Nagata, Kuniko Saito, Kazuhide Yamamoto, and Kazuteru Ohashi. 2006. A clustered global phrase reordering model for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 713–720, Sydney, Australia. Association for Computational Linguistics. Toshiaki Nakazawa, Manabu Yaguchi, Kiyotaka Uchimoto, Masao Utiyama, Eiichiro Sumita, Sadao Kurohashi, and Hitoshi Isahara. 2016. ASPEC: Asian scientific paper excerpt corpus. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 2204–2208, Portoroˇz, Slovenia. European Language Resources Association (ELRA). Matt Post. 2018. A call for clarity in reporting BLEU scores. CoRR, abs/1804.08771. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464– 468, New Orleans, Louisiana. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Curran Associates, Inc. Christoph Tillman. 2004. A unigram orientation model for statistical machine translation. 
In Proceedings of HLT-NAACL 2004: Short Papers, Stroudsburg, PA, USA. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 76–85, Berlin, Germany. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Rui Wang, Andrew Finch, Masao Utiyama, and Eiichiro Sumita. 2017a. Sentence embedding for neural machine translation domain adaptation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 560–566, Vancouver, Canada. Association for Computational Linguistics. Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2017b. Instance weighting for neural machine translation domain adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1482–1488, Copenhagen, Denmark. Association for Computational Linguistics. Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2018. Dynamic sentence sampling for efficient training of neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 298–304, Melbourne, Australia. Association for Computational Linguistics. Dekai Wu. 1996. A polynomial-time algorithm for statistical machine translation. In Proceedings of the 34th Annual Meeting on Association for Computational Linguistics, ACL ’96, pages 152– 158, Santa Cruz, California. Association for Computational Linguistics. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3). Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144. Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum entropy based phrase reordering model for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 521–528, Sydney, Australia. Association for Computational Linguistics. 1799 Richard Zens and Hermann Ney. 2006. Discriminative reordering models for statistical machine translation. In Proceedings on the Workshop on Statistical Machine Translation, pages 55–63, New York City. Association for Computational Linguistics. Jinchao Zhang, Mingxuan Wang, Qun Liu, and Jie Zhou. 2017. Incorporating word reordering knowledge into attention-based neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1524–1534, Vancouver, Canada. 
Association for Computational Linguistics. Zhisong Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita, and Hai Zhao. 2018. Exploring recombination for efficient decoding of neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4785–4790, Brussels, Belgium. Association for Computational Linguistics. Zhisong Zhang, Hai Zhao, and Lianhui Qin. 2016. Probabilistic graph-based dependency parsing with convolutional neural network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1382–1392, Berlin, Germany. Yang Zhao, Jiajun Zhang, Zhongjun He, Chengqing Zong, and Hua Wu. 2018a. Addressing troublesome words in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 391–400, Brussels, Belgium. Association for Computational Linguistics. Yang Zhao, Jiajun Zhang, and Chengqing Zong. 2018b. Exploiting pre-ordering for neural machine translation. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan. European Language Resource Association.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1800–1809 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1800 Neural Fuzzy Repair: Integrating Fuzzy Matches into Neural Machine Translation Bram Bult´e Centre for Computational Linguistics (CCL) KU Leuven [email protected] Arda Tezcan Language and Translation Technology Team (LT3) Ghent University [email protected] Abstract We present a simple yet powerful data augmentation method for boosting Neural Machine Translation (NMT) performance by leveraging information retrieved from a Translation Memory (TM). We propose and test two methods for augmenting NMT training data with fuzzy TM matches. Tests on the DGTTM data set for two language pairs show consistent and substantial improvements over a range of baseline systems. The results suggest that this method is promising for any translation environment in which a sizeable TM is available and a certain amount of repetition across translations is to be expected, especially considering its ease of implementation. 1 Introduction Even though Machine Translation (MT) quality may have increased considerably over the past years, most notably with advances in the field of Neural Machine Translation (NMT), Translation Memories (TMs) still offer some advantages over MT systems. They are not only able to translate previously seen sentences ‘perfectly’ but they also offer ‘near perfect’ translation quality when highly similar source sentences are retrieved from the TM. As a result, in Computer-Assisted Translation (CAT) workflows, the MT system is often used as a backoff mechanism when the TM fails to retrieve high fuzzy matches above a certain threshold (Rossi and Chevrot, 2019; Federico et al., 2012), even though it has been shown that this basic integration method is not always the most optimal TM-MT combination strategy (Simard and Isabelle, 2009). Our aim in this paper is to integrate the advantages of TMs into NMT systems in order to improve MT quality by utilizing existing translations for highly similar source sentences in a given TM. We propose a simple method for TM-NMT integration that is based on augmenting the source data with retrieved fuzzy TM targets by means of concatenation. We train both dedicated Neural Fuzzy Repair (NFR) systems that deal specifically with query sentences for which a (sufficiently high-scoring) match is found in the TM as well as unified systems capable of translating any query sentence. Several configurations are tested on the DGT-TM data set (Steinberger et al., 2013) for the language directions English into Dutch (EN→NL) and English into Hungarian (EN→HU). In the next section, we provide an overview of previous research on TM-MT integration. Section 3 details the approach proposed in this paper. The experimental setup is presented in section 4, and the results in section 5. This is followed by the discussion (section 6) and conclusion (section 7). 2 Research background The idea to combine the advantages of TM and MT is certainly not new. Early TM-MT integration approaches made use of example-based MT systems (Simard and Langlais, 2001) or focused on editing high-scoring TM matches (Hewavitharana et al., 2005). Editing TM matches (or fuzzy repair) proved to be beneficial for the quality of MT output, as demonstrated in later studies that also implemented such an approach (Ortega et al., 2016). 
Alternatively, phrase-based statistical MT (PBSMT) systems have been augmented with TM information by constraining the output to contain (parts of) retrieved TM matches (Koehn and Senellart, 2010a), by enriching the system’s phrase table (Bic¸ici and Dymetman, 2008; Simard and Isabelle, 2009), or by adapting the PBSMT system itself (Wang et al., 2013), all leading to significantly better performance. 1801 More recently, with the rise of NMT, researchers focused on ways to incorporate TM information in neural MT architectures. For example, this has been attempted by means of a lexical memory added to the NMT system (Feng et al., 2017), lexical constraints imposed on the NMT search algorithms (Hokamp and Liu, 2017), rewards attached to retrieved and matched translation pieces that guide the NMT output (Zhang et al., 2018), by explicitly providing the NMT system with access to a list of retrieved TM matches during decoding (Gu et al., 2018), or by adding an extra encoder for retrieved TM matches (Cao and Xiong, 2018). In all cases, this resulted in impressive gains in estimated translation quality. All of these TM-NMT integration approaches either alter the search algorithms at decoding or change the architecture of the NMT system by combining information from multiple encoders. Our method is different in that it only involves a change in data preprocessing, without altering the NMT system itself. The proposed change at preprocessing is inspired by research on Automatic Post-Editing (APE) of MT output as well as multisource machine translation. In the context of APE, NMT engines have been trained with a concatenation of source sentence and MT output at the source side, with a specific break token separating the two strings (Hokamp, 2017). A similar simple concatenation approach has also been used to take advantage of multiple source languages to increase the quality of NMT output (Dabre et al., 2017). In both cases, the NMT systems managed to process these augmented inputs successfully. In the next section, we describe the TM-NMT integration approach followed in this paper. 3 Neural Fuzzy Repair We present a simple approach to TM-NMT integration, based on augmenting source sentences with fuzzy matches retrieved from a TM, and training dedicated or unified NMT systems. First, we present the TM system and method for fuzzy match retrieval. We then describe how we augment the input that is used to train an NMT system, which is presented next. 3.1 TM and Fuzzy match retrieval Our TM consists of any set M of source and target sentence pairs (S, T); the same sentences that would be used as training data for an MT system. Each source sentence si ∈S is compared to all other source sentences sj ∈S using a similarity metric Sim. The fuzzy source sentences S′ i ∈S that match a given source sentence si with a similarity score higher than the specified threshold λ are stored in the set Fsi together with their corresponding target sentences T ′ i ∈T (Sim(si, sj) ≥ λ). Perfect matches (Sim(si, sj) = 1) are excluded from Fsi. We use token-based edit distance (Levenshtein, 1966) as primary match metric for the tests in this paper1, based on the work of Hyyr¨o (2001). Since extracting fuzzy matches from a large TM using edit distance is computationally costly2, we attempt to speed up this process in three ways. First, for each source sentence we extract candidates using the SetSimilaritySearch3 library for Python and calculate editdistance only on the extracted candidates (sss+ed). 
SetSimilaritySearch offers a vector similarity search algorithm based on indexing and optimization strategies that does not rely on approximation methods, and offers performance gains over a number of inverted listbased approaches and signature-based methods (Bayardo et al., 2007). To extract candidates for high fuzzy matches with SetSimilaritySearch, we use the similarity measure containmentmax, which is defined as follows: containmentmax(vi, vj) = ∥vi ∩vj∥ max(∥vi∥, vj∥∥) where vi and vj are two vectors consisting of unique tokens obtained from two sentences si and sj, respectively. Second, we only calculate the editdistance score for the n-best candidates extracted by SetSimilaritySearch (sss nbest+ed). Finally, we use multi-threading (sss nbest+ed(mt)). In Section 5.1 we evaluate what impact these three techniques have on the speed of retrieval and the number of matches retrieved. 3.2 Source augmentation For each source sentence si for which at least one sufficiently high-scoring match is found in the TM (i.e. Fsi ̸= ∅), an augmented source xi is generated according to one of the following formats, 1https://github.com/aflc/editdistance. This metric can be replaced by other alternatives in the literature (Bloodgood and Strauss, 2015). 2Extracting fuzzy matches for all source sentences in a data set consisting of 20K sentences took roughly 1 hour (3996 seconds) on a 2.50GHz Intel Xeon E5 core. 3https://github.com/ardate/SetSimilaritySearch 1802 while preserving the original target sentence ti: • format 1: xi : si @@@ t′ 1 • format 2: xi : si @@@ t′ 1 @@@ t′ 2 • format 3: xi : si @@@ t′ 1 @@@ t′ 2 @@@ t′ 3 where t′ 1 represents the target side of the highest scoring match s′ 1 in Fsi, and t′ 2 and t′ 3 the target side of the second and third highest scoring matches s′ 2 and s′ 3, respectively. We use ‘@@@’ as break token marking the boundary between two sentences. For formats 2 and 3, in case Fsi does not contain at least either 2 or 3 elements, the corresponding empty slots are left blank. Each augmented source xi, coupled with its original target sentence ti taken from M, is stored in the new set M′ = (X, T). In addition to using format 1 as described above, we also test an alternative configuration ‘format 1 n-best’, in which we include augmentedsource/target pairs (Xn, T) in M′ by utilizing the n-best matches for a given si. For example, with this alternative configuration, when n = 3, X n contains the following augmented source for each si, which are paired with the original target sentence ti.: • format 1 n-best: x1 i : si @@@ t′ 1 x2 i : si @@@ t′ 2 x3 i : si @@@ t′ 3 This alternative configuration only affects the training set M′ and does not change the way test sentences are handled. For all different values of n, the source sentences in the test set are augmented with the translation of the best possible fuzzy match t′ 1. The different data augmentation strategies described above potentially lead to different sizes of training data sets (see Section 5.2). 3.3 NMT system We use OpenNMT (Klein et al., 2017) with close to standard settings to train our NFR systems. For example, we kept the default optimizer (sgd), learning rate (1.0), word embedding size (500 for source and target), batch size (64) and dropout probability (0.3). We did, however, change a number of parameters related to data preprocessing and training. 
The maximum source and target length at preprocessing are set to 300 and 100, respectively, and the source vocabulary size is doubled to 100K (since the augmented source input X are bilingual). We train seq2seq bidirectional RNN models with global attention, and increased the hidden LSTM layer nodes to 750 (from 500), training steps to 200K (from 100K) and learning rate decay to 0.8 (from 0.5). 3.4 Integration Two methods for integrating the augmented training set M′ in the NMT workflow are tested based on the different formats described in Section 3.2. We create: • two separate NMT systems, a backoff NMT system with M as training data and a dedicated NFR system with only M′ as training data, or • one unified NFR system that uses the union of sets M and M′ as training data. We retrieve fuzzy matches for each query sentence qi in the test set Q, by comparing them to each sj in the training set M in line with the method described under 3.1. In case at least one match is found for which Sim(qi, sj) ≥λ, an augmented query input y is generated according to the method described under 3.2. As the dedicated system is only capable of translating y, it is combined with a backoff system capable of translating q, in order to translate all source sentences in a given test set. On the other hand, the unified system, which can be considered a simpler alternative to the backoff integration method, can translate both q and y. 4 Experimental setup In this section we describe the baseline systems our NFR systems are compared with, the data, and evaluation. 4.1 Baseline systems We compare the NFR systems to five baselines: (a) a standard NMT model, (b) a phrase-based SMT system, (c) TM matching, (d) a previously developed hybrid TM-SMT system (Bult´e et al., 2018), and (e) Google Translate4. The baseline NMT system is the backoff NMT system with M as training data as described in 4February, 2019. 1803 Section 3.4. As SMT baseline we train a Moses engine (Koehn et al., 2007) with the sentence pairs in M, using standard settings5. TM matching simply means selecting the highest scoring TM target t′ 1 for each query sentence qi. Finally, we include Google Translate as an example of a widely used NMT system, which is not trained with domainspecific data, unlike the other baseline systems. 4.2 Data We use the TM of the Directorate-General for Translation of the European Commission (Steinberger et al., 2013) for two language pairs: English into Dutch and English into Hungarian. All sentences were tokenized using the Moses toolkit as well as lowercased prior to training. We randomly divide the data into a training set (approx. 2.4M sentence pairs), two development sets (3000 sentence pairs each) and a test set (3207 sentences). The first development set is used for validation during training of the NMT systems and for tuning the SMT systems; the second development set is used to test the performance of different NFR configurations. Test sentences for which a perfect match was found in either the training or one of the development sets were removed. We ensured that the source side for all data sets was identical for both language pairs. We use pure token-based editdistance to extract fuzzy matches for the source sentences in the two development sets and the test set, considering their relatively small size. We use editdistance with candidate selection using SetSimilaritySearch to extract matches in the training set (see Section 3.1). 
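To tie Sections 3.1 and 3.2 together, the following sketch shows how an augmented training instance could be built: candidate matches are pre-filtered with a set-based containment score, re-ranked with token-level edit-distance similarity, and the best target sides are concatenated to the source with the @@@ break token. This is our own illustration rather than the authors' implementation; the functions below are simplified stand-ins for the SetSimilaritySearch and editdistance libraries, and the toy TM entries and threshold are placeholders.

```python
def containment_max(a_tokens, b_tokens):
    """Set-based containment_max score used for candidate pre-filtering."""
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / max(len(a), len(b))

def edit_similarity(a_tokens, b_tokens):
    """Token-level Levenshtein similarity: 1 - distance / max_length."""
    m, n = len(a_tokens), len(b_tokens)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a_tokens[i - 1] == b_tokens[j - 1] else 1
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost)
        prev = cur
    return 1.0 - prev[n] / max(m, n)

def augment(source, tm, threshold=0.5, n_best=3):
    """Build an augmented source in the style of format 3:
    s @@@ t'_1 @@@ t'_2 @@@ t'_3 (empty slots simply omitted here)."""
    src = source.split()
    candidates = [(s, t) for s, t in tm
                  if containment_max(src, s.split()) >= threshold]
    scored = sorted(((edit_similarity(src, s.split()), t) for s, t in candidates),
                    reverse=True)
    matches = [t for sim, t in scored if threshold <= sim < 1.0][:n_best]
    return " @@@ ".join([source] + matches) if matches else source

# Toy TM with two EN-NL sentence pairs (invented examples)
tm = [("the committee approved the proposal", "het comité keurde het voorstel goed"),
      ("the committee rejected the proposal", "het comité verwierp het voorstel")]
print(augment("the committee approved the new proposal", tm))
```

The same routine, applied to a query sentence at test time, yields the augmented query input y described in Section 3.4; when no match clears the threshold, the unified system (or the backoff NMT system) simply receives the plain source sentence.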
Table 1 shows the percentage of query sentences in the test set for which fuzzy matches are found in different match ranges (i.e. <50, 50-59 ... 90-99). Since the source sentences in the test and training sets are the same for both language pairs, the values apply to both EN-NL and EN-HU. < 50 50-59 60-69 70-79 80-89 90-99 41.3% 11.4% 10.3% 8.8% 14.2% 14.0% Table 1: Percentage of test sentences per fuzzy match range (n=3207). For 58.7% of the sentences in the test set a match of 50% or higher was found in the TM, with proportionally most matches occurring in the highest match ranges. 55-gram KenLM, distortion limit = 6, max. phrase length = 7. < 50 50-59 60-69 70-79 80-89 90-99 32.9 21.0 21.1 24.4 23.4 33.1 Table 2: Average number of source tokens per sentence, per fuzzy match range. Table 2 shows the average number of source tokens per sentence for each fuzzy match range. On average, the longest sentences are found at both ends of the fuzzy match scale, i.e. the highest match range and the subset of sentences without fuzzy match higher than 50%, with approximately 33 tokens per sentence. In the other match ranges, sentences are around 10 tokens shorter. 4.3 Evaluation Three automated evaluation metrics are used: BLEU6 (Papineni et al., 2002), TER7 (Snover et al., 2006), and METEOR8 (Lavie and Agarwal, 2007). There is one reference translation per test sentence. BLEU scores are used as the primary evaluation metric, and the significance of performance differences in terms of BLEU scores between systems is tested using bootstrap resampling (Koehn, 2004). All evaluations are carried out on tokenized data. 5 Results In this section we describe the impact of our fuzzy matching technique on the speed of retrieval and the quantity of retrieved matches (5.1), the outcome of the NFR system selection (5.2), the final results on the test set (5.3), as well as the effect of the size of the TM on the performance of the NFR system (5.4). 5.1 Fuzzy match retrieval Table 3 shows the fuzzy match extraction time for four different approaches, as defined in Section 3.1, on three different sizes of data sets. To analyze the fuzzy matching speed of these different approaches, we extracted a maximum of 5 fuzzy matches for each source sentence and used λ = 0.5 as threshold for both editdistance and SetSimilaritySearch. Relatively small subsets (randomly extracted 5K, 10K and 20K sentence pairs) of the original training data were used for these tests. The table also shows the relative fuzzy matching 6Moses multi-bleu.perl script. 7Version 0.7.25: https://github.com/snover/terp 8Version 1.5: https://www.cs.cmu.edu/∼alavie/METEOR/ 1804 speed of the three different methods compared to editdistance alone, on the data set containing 20K sentence pairs (%20K). Method 5K 10K 20K %20K ed 303 1071 3996 100% sss+ed 15 54 158 3,95% sss n20+ed 7 27 100 2,50% sss n20+ed(16t) 1 3 10 0,25% Table 3: Fuzzy matching speed (seconds) on 5, 10 and 20 thousand sentence pairs using four different methods. n20 refers to 20-best candidates and 16t to multithreading with 16 threads. By using the three techniques described in Section 3.1, we reduced the fuzzy matching time on the training set to 0,25% of the time it takes to extract matches using only editdistance on the 20K data set. Using the sss nbest+ed(mt) method, we extracted all fuzzy matches for all source sentences per training set described in Section 4.2 in approximately 24 hours9. 
While taking n-best match candidates reduces the number of editdistance calculations, depending on the value of n, it also potentially leads to a loss of training data. Table 4 provides the percentage of source sentences for which no fuzzy matches are found above the editdistance threshold of 0.5 using three different matching methods.

Method       5K       10K      20K
ed           78.86%   75.88%   71.56%
sss+ed       78.86%   75.88%   71.56%
sss n20+ed   78.92%   75.97%   71.73%
Table 4: Percentage of source sentences without fuzzy matches above the editdistance score of 0.5, in sets of 5, 10 and 20 thousand sentence pairs.

The results in Table 4 indicate that calculating editdistance only on the candidates extracted by SetSimilaritySearch does not lead to data loss in these three data sets. Limiting the candidate list to the 20-best candidates, however, slightly increases the number of sentences for which no fuzzy matches are found. Even though the increase seems minimal for these three relatively small data sets (i.e. 0.06%, 0.09% and 0.17% for 5, 10 and 20 thousand sentence pairs respectively), there is an increasing trend with increasing data size. (The full extraction reported above, which took approximately 24 hours per training set, used 2000-best candidates and 16 threads.)

5.2 NFR system selection
We use the second development set to test different NFR configurations. For the sake of these tests, we fix the minimum fuzzy-match threshold λ to 0.5. Six different dedicated NFR systems and three unified systems are compared. We test two parameters: the augmented input format (F1-F3), and the n-best matches included per source sentence using format 1 (F1 n-best 1-3), as described in Section 3.2. The best-scoring NFR systems are selected for the final evaluation on the basis of the test set. Table 5 provides the results of the evaluation on the second development set for the baseline systems and the dedicated and unified NFR systems for both language pairs. Here we only consider the subset of sentences for which a match was found in the TM with a match score higher than 0.5 (2266 sentences), and only look at BLEU scores. Table 5 also shows the size of the training set for each system configuration, given that the different configurations lead to training data sets of varying sizes (see Section 3.2).

System                EN-NL (BLEU)  EN-HU (BLEU)  Train set
Baseline NMT          64.16         53.52         2.4M
Baseline SMT          68.99         46.41         2.4M
Baseline TM           69.92         60.12
Google Translate      49.84         39.55         N/A
Dedicated F1 1-best   79.22         68.35         1.8M
Dedicated F1 2-best   78.95         68.25         3.2M
Dedicated F1 3-best   78.70         68.77         4.5M
Dedicated F2          79.31         68.69         1.8M
Dedicated F3          79.33         68.45         1.8M
Unified F1 1-best     78.59         67.35         4.2M
Unified F2            78.96         67.56         4.2M
Unified F3            79.06         67.65         4.2M
Table 5: BLEU scores on the development set for sentences with at least one fuzzy match above the threshold of 0.5, and size of the training data set, per system.

For EN-NL, all NFR systems score between 8.35 and 9.41 BLEU points higher than the best baseline system (TM) for this subset of sentences. Only 0.74 BLEU points separate the worst and the best performing NFR systems. Dedicated F3 obtained the best BLEU score, closely followed by Dedicated F2. Unified F3 also slightly outperforms the other unified systems trained with the second and the first data format. For EN-HU, too, there is only a 1.42 BLEU point difference between the worst and best scoring NFR systems. Here, the best NFR system outperforms the best baseline (TM) by 8.65 BLEU points. We note that the TM baseline in itself scores 6.6 BLEU points higher than the best MT baseline (NMT). The dedicated NFR system F1 3-best attains the highest BLEU score.
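To make the compared configurations concrete, the sketch below shows how an F1-style augmented input could be assembled from the retrieval output. It is only an illustration: the exact F1-F3 formats are defined in Section 3.2 (not repeated here), and the break token name is our placeholder, not necessarily the one used in the actual systems.

```python
BREAK = "@@@"   # assumed separator token between source and TM target (placeholder)

def augment_source(src_toks, matches, tm_targets, n_best=1):
    """Assemble an F1-style augmented input: source ++ BREAK ++ TM target(s).

    matches    : (similarity, tm_index) pairs returned by fuzzy retrieval
    tm_targets : tokenized target sides of the TM
    n_best     : how many match targets to append (F1 1-best / 2-best / 3-best)
    """
    augmented = list(src_toks)
    for _, idx in matches[:n_best]:
        augmented += [BREAK] + list(tm_targets[idx])
    return augmented
```

A dedicated system is trained only on such augmented pairs, while a unified system sees both augmented and plain source sentences; queries for which no match reaches the threshold keep their original source.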
5.3 Test set evaluation Table 6 contains the results for EN-NL for the entire test set (3207 sentences). The dedicated NFR + NMT backoff approach outperforms all baseline systems, scoring +3.19 BLEU, -3.6 TER and +1.87 METEOR points compared to the best baseline (TM-SMT). Compared to the NMT baseline, the difference is 7.46 BLEU points. The best unified NFR system (Unified F3) scores only slightly worse than the approach with a dedicated NFR system and NMT backoff. Both NFR systems score significantly higher than the best baseline in terms of BLEU (p < 0.001). We note that the baseline SMT outperforms the baseline NMT, which in turn obtains better scores than Google Translate on this data set. System BLEU TER MET. Baseline NMT 51.45 36.21 69.83 Baseline SMT 54.21 35.99 71.28 Baseline TM-SMT 55.72 34.96 72.25 Google Translate 44.37 41.51 65.07 Best NFR + NMT backoff 58.91 31.36 74.12 Best NFR unified 58.60 31.57 73.96 Table 6: Test results EN-NL (all sentences). The results for EN-HU (Table 7) show a similar overall picture, with an even clearer advantage for the NFR systems. The best dedicated NFR system with NMT backoff (Dedicated F1 3best) scores 7.06 BLEU points more than the best baseline (TM-SMT), and also yields considerable improvements in terms of TER (-5.34) and METEOR (+4.46). The unified NFR system scores only 0.41 BLEU points lower than the dedicated NFR+backoff system. Also for this language pair the differences in BLEU scores between both NFR systems and the best baseline system are statistically significant (p < 0.001). The TM-SMT system is the best baseline in terms of BLEU and METEOR (but not in terms of TER, with the baseline NMT system scoring over 4.5 points better). In contrast to the EN-NL tests, where the SMT system scored better than the NMT system, the baseline NMT for EN-HU obtains a higher translation quality than the SMT baseline. Moreover, Google Translate gives comparable results to those of the baseline SMT system (better in terms of TER but worse in terms of BLEU and METEOR). System BLEU TER MET. Baseline NMT 40.47 45.45 57.68 Baseline SMT 33.65 54.76 53.96 Baseline TM-SMT 41.18 49.98 58.67 Google Translate 32.11 52.99 51.40 Best NFR + NMT backoff 48.24 40.11 63.13 Best NFR unified 47.83 40.14 62.77 Table 7: Test results EN-HU (all sentences). Next we look at the performance of the different systems on different subsets of the test set classified according to the best fuzzy match score (Table 8). For EN-NL, both NFR systems outperform all baselines in all match ranges from 0.6 onward. In the match range 0.5-0.59, the SMT and TMSMT baselines obtain higher BLEU scores than both NFR systems. For EN-HU, the NFR systems outperform all baselines in all match ranges except for No match. The scores of both NFR systems for both language pairs consistently increase across increasing match ranges, a pattern which is also followed by the TM baseline. We note that the NFR systems, also in the highest match range, clearly outperform the TM baselines for both language pairs. If we disregard the TM and TM-SMT baselines and only look at the ‘pure’ MT baselines, the difference between the NFR systems and the MT baselines consistently becomes larger with increasing fuzzy match score, for both language pairs. In the highest match range (i.e. 0.9 - 0.99), the increase in BLEU scores compared to the NMT baseline is 21.95 points for EN-NL and 22.76 points for EN-HU. In the range 0.8 - 0.89 this is 15.68 and 17.7 BLEU points respectively. 
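The significance claims above rely on paired bootstrap resampling (Koehn, 2004). A minimal sketch of that procedure is given below; the corpus_bleu argument stands for any corpus-level BLEU implementation (for instance the multi-bleu script or a library equivalent), and the function name and defaults are ours.

```python
import random

def paired_bootstrap(sys_a, sys_b, refs, corpus_bleu, n_samples=1000, seed=1):
    """Paired bootstrap resampling (Koehn, 2004), sketched.

    sys_a, sys_b : hypothesis sentences of the two systems (aligned with refs)
    refs         : reference sentences, one per test sentence
    corpus_bleu  : callable returning corpus-level BLEU for (hypotheses, references)
    Returns the fraction of resampled test sets on which system A beats system B;
    (1 - this fraction) approximates the p-value for "A is not better than B".
    """
    rng = random.Random(seed)
    n = len(refs)
    wins_a = 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]   # resample sentences with replacement
        a = [sys_a[i] for i in idx]
        b = [sys_b[i] for i in idx]
        r = [refs[i] for i in idx]
        if corpus_bleu(a, r) > corpus_bleu(b, r):
            wins_a += 1
    return wins_a / n_samples
```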
As Table 2 showed, there is no correlation between fuzzy match range and average sentence length, which means that a decreasing average sentence length is not an explanation for the increasing performance of the NFR systems with increasing fuzzy match scores. The results suggest that from a fuzzy match score of between 0.5 and 0.6 onward, it becomes advantageous to use an NFR system, at least for the data sets used in this study. For those sentences in the test set for which no match higher than the given threshold (λ ≥ 0.5) was found in the training set (No match), the unified NFR system performs slightly worse than the best baselines for translation into both Dutch (-0.34 BLEU) and Hungarian (-0.74 BLEU).

System               EN-NL   EN-HU
No match
  Baseline NMT       40.77   29.76
  Baseline SMT       40.87   23.21
  Baseline TM-SMT    39.71   23.49
  Google Translate   39.02   27.3
  Best NFR unified   40.53   29.02
0.5-0.59
  Baseline NMT       51.14   39.23
  Baseline SMT       54.11   33.50
  Baseline TM        34.23   28.38
  Baseline TM-SMT    53.86   34.67
  Google Translate   43.24   32.16
  Best NFR dedicated 50.21   40.28
  Best NFR unified   51.55   42.61
0.6-0.69
  Baseline NMT       56.72   44.73
  Baseline SMT       61.86   39.07
  Baseline TM        49.56   40.67
  Baseline TM-SMT    61.75   41.81
  Google Translate   49.82   35.32
  Best NFR dedicated 65.31   52.13
  Best NFR unified   63.76   53.14
0.7-0.79
  Baseline NMT       57.59   45.75
  Baseline SMT       64.84   40.54
  Baseline TM        61.52   49.39
  Baseline TM-SMT    66.22   48.82
  Google Translate   46.79   36.32
  Best NFR dedicated 73.12   59.29
  Best NFR unified   72.78   57.73
0.8-0.89
  Baseline NMT       67.01   55.91
  Baseline SMT       71.14   47.69
  Baseline TM        69.66   61.89
  Baseline TM-SMT    70.28   60.81
  Google Translate   52.89   38.54
  Best NFR dedicated 82.69   73.27
  Best NFR unified   82.09   73.61
0.9-0.99
  Baseline NMT       65.95   56.16
  Baseline SMT       71.49   47.12
  Baseline TM        83.77   74.67
  Baseline TM-SMT    83.49   75.24
  Google Translate   50.92   38.29
  Best NFR dedicated 87.90   78.92
  Best NFR unified   87.41   77.59
All ≥ 0.5
  Baseline NMT       61.28   50.28
  Baseline SMT       66.27   43.09
  Baseline TM        64.63   55.94
  Baseline TM-SMT    70.19   56.89
  Google Translate   49.38   36.66
  Best NFR dedicated 75.31   64.85
  Best NFR unified   74.96   64.78
Table 8: Test results (BLEU scores, different match ranges).

Note that for this subset of test sentences the performance of the different MT systems is highly comparable for EN-NL. In this match range, for example, even Google Translate scores only 1.85 BLEU points lower than the best-scoring system (SMT). For EN-HU, SMT is clearly outperformed by both NMT and Google Translate in the No match range.

5.4 Effect of TM size
Considering that the success of the NFR systems depends on the number of highly similar matches retrieved from the TM, we examine the effect of different TM sizes by evaluating the performance of the baseline NMT and the best unified NFR system on increasingly smaller subsets of our original EN-NL data set. Figure 1 shows the translation quality for the baseline NMT and the best unified NFR system (Unified F3) for five different TM sizes, which are indicated as percentages of the original TM size (i.e. approx. 2.4M sentence pairs), as well as the percentage of source sentences in the test set for which similar sentences are retrieved above the similarity threshold (λ ≥ 0.5).

Figure 1: Effect of TM size on translation quality (BLEU) and number of 'similar' matches retrieved from TM.
The NFR system outperforms the baseline NMT system for all TM sizes. The difference in BLEU scores between the two systems becomes more pronounced starting from 12.5% of the original TM size (i.e. approx. 300K sentence pairs), when for 35% of the sentences in the test set a similar match is retrieved from the TM. We note that the NFR system built with 12.5% of the original TM size yields higher BLEU scores than the baseline NMT system trained with the full TM (51.17 vs. 50.07). (In this experiment we used 100K training steps instead of 200K to speed up training, which led to a slight decrease in BLEU scores for the systems built using the original TM.)

6 Discussion
The results of this study confirm that integrating TM information in NMT systems can result in significantly better translation quality, as demonstrated in a number of previous studies (Cao and Xiong, 2018; Hokamp and Liu, 2017; Gu et al., 2018; Zhang et al., 2018). The main novelty of our approach is that it only involves data preprocessing, without altering the architecture (e.g. by adding additional encoders) or algorithms of the NMT system. This makes our method easy to implement, since it is compatible with any 'standard' or out-of-the-box NMT system. This should allow for a smoother implementation and wider adoption. The NFR systems proposed in this study not only outperform all MT baselines, they also obtain better scores than the TM baseline in all fuzzy match ranges (including the highest ones). This shows that the NFR systems not only successfully exploit the information from TM matches, but go beyond this and effectively succeed in 'repairing' the fuzzy matches, at least to a certain extent. We argue that, for this reason, NFR systems (or, more generally speaking, systems offering NMT-TM integration) might gradually replace TM retrieval in CAT workflows in the future, where MT is currently still often used as a backoff option (Rossi and Chevrot, 2019; Federico et al., 2012). The fact that the MT baselines in our study do not obtain better scores than 'pure' TM retrieval in the higher match ranges (i.e. 0.8-0.99 for EN-NL and 0.7-0.99 for EN-HU) appears to confirm why this is still the case. Moreover, it is possible that NFR systems help to lower the resistance some translators have to adopting MT (Cadwell et al., 2018), especially when the TM origins of parts of the MT output are marked using automatic word-alignment methods (Bulté et al., 2018), since this could potentially increase translators' confidence in the quality of automatically generated translations. Even though we only performed a limited number of tests on one data set, the results show that the NFR system is successful for two language pairs, EN-NL and EN-HU, in spite of the typological differences between the two target languages. Moreover, the results of the system selection procedure reveal that the NFR system is rather robust, in that different configurations yield comparable results, and all lead to significant improvements in estimated translation quality. While combining the dedicated NFR with the baseline NMT systems yielded the best results for both language pairs, the unified NFR systems achieve comparable BLEU gains over the baseline NMT systems. As a result, the unified NFR systems offer a yet simpler alternative, due to their ability to translate all source sentences. The analyses per match range reveal that using an NFR system starts being advantageous with fuzzy match scores between 0.5 and 0.6.
It seems logical that any TM-based method is only suited for contexts with a sizeable TM and with a certain expected degree of repetition and overlap in the data. The tests related to training data size show, however, that this method is still beneficial with smaller TMs. For example, an NFR system built with only 1/8th of the original data set still achieved higher BLEU scores than the baseline NMT system trained on the full data set. We can argue that the most important factor for the NFR systems proposed in this study is the amount of overlap between the training and query sentences. Looking at the performance of the baseline MT systems, and in particular the relationship between SMT and NMT, there is a clear difference between the two target languages. The EN-NL SMT outperforms NMT by almost 3 BLEU points when evaluating the complete test set and obtains better scores in each of the match ranges (Table 6). The opposite is true for EN-HU, for which the NMT baseline outperforms the SMT baseline by almost 7 BLEU points on the whole test set (Table 7), a trend which is also visible in all match ranges (Table 8). Our findings are in line with those of Koehn et al. (2009), who compare the SMT quality of 462 language pairs and report generally lower SMT quality when translating into morphologically rich languages, such as Hungarian, Finnish and Estonian. The poorer translation quality of the EN-HU SMT in this study can potentially be attributed to the fact that a rich morphology (involving inflections and derivations) leads to an increase in vocabulary size and an overall data sparsity problem, which brings about additional challenges for the 'standard' phrase-based SMT systems that rely on explicit phrase alignments on surface forms (Koehn, 2009). Instead of relying on surface forms, NMT systems utilize distributed, abstract word representations that can capture syntactic and semantic relationships between words, which could (partly) explain their relative success on the EN-HU language pair. In relation to the speed of fuzzy match retrieval, which can be an issue when matches have to be retrieved for all source sentences in a TM, the results suggest that SetSimilaritySearch can be used as a fast proxy for editdistance. However, in this context it is important to strike the right balance between processing time and loss of training data by using different values for the minimum similarity score and the n-best candidates for SetSimilaritySearch. It still needs to be tested how well the NFR system works with other fuzzy matching metrics (Vanallemeersch and Vandeghinste, 2015), and how fast fuzzy matches can be retrieved from a TM with alternative methods, such as using the off-the-shelf search engine Apache Lucene (Gu et al., 2018; Zhang et al., 2018) or other approximate string matching methods (Koehn and Senellart, 2010b; Navarro, 2001).
Tests on two language pairs (EN-NL and EN-HU) showed that this method can achieve substantial gains in estimated translation quality compared to a range of baseline systems, even for relatively small training set sizes. We believe that the ease of implementation of NFR could lead to the wider adoption of TM-NMT integration. In a next step, we plan to compare the performance of NFR to other approaches to TM-NMT integration, for example by carrying out evaluations on the JRC-Acquis corpus (Gu et al., 2018; Koehn and Senellart, 2010a; Zhang et al., 2018). The approach also needs to be tested on data sets with a lower frequency of repeated sentences, other language pairs as well as different domains, ultimately also involving human evaluation (both in term of perceived quality and post-editing time). In addition, it would be informative to carry out a qualitative analysis of the NFR output in terms of how and to what extent the information contained in the fuzzy matches is used in the final translation, in comparison with the NMT baseline. We also intend to carry out further tests to potentially improve the quality of the output, for example by testing different match metrics and retrieval methods, NMT architectures (e.g. transformer), ways to include alignment information and by applying additional morphological preprocessing. References Roberto J. Bayardo, Yiming Ma, and Ramakrishnan Srikant. 2007. Scaling up all pairs similarity search. In Proceedings of the 16th International Conference on World Wide Web, pages 131–140. Ergun Bic¸ici and Marc Dymetman. 2008. Dynamic translation memory: using statistical machine translation to improve translation memory fuzzy matches. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 454– 465. Michael Bloodgood and Benjamin Strauss. 2015. Translation memory retrieval methods. Computing Research Repository, arXiv:1505.05841. Bram Bult´e, Tom Vanallemeersch, and Vincent Vandeghinste. 2018. M3TRA: integrating TM and MT for professional translators. In Proceedings of the 21st Annual Conference of the European Association for Machine Translation, pages 69–78. Patrick Cadwell, Sharon O’Brien, and Carlos S. C. Teixeira. 2018. Resistance and accommodation: factors for the (non-) adoption of machine translation among professional translators. Perspectives, 26(3):301–321. Qian Cao and Deyi Xiong. 2018. Encoding gated translation memory into neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3042–3047. Raj Dabre, Fabien Cromieres, and Sadao Kurohashi. 2017. Enabling multi-source neural machine translation by concatenating source sentences in multiple languages. Computing Research Repository, arXiv:1702.06135. Marcello Federico, Alessandro Cattelan, and Marco Trombetti. 2012. Measuring user productivity in machine translation enhanced Computer Assisted Translation. In Proceedings of the 2012 Conference of the Association for Machine Translation in the Americas, pages 44–56. Yang Feng, Shiyue Zhang, Andi Zhang, Dong Wang, and Andrew Abel. 2017. Memory-augmented neural machine translation. Computing Research Repository, arXiv:1708.02005. Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor O. K. Li. 2018. Search engine guided neural machine translation. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 5133– 5140. 1809 Sanjika Hewavitharana, Stephan Vogel, and Alex Waibel. 2005. 
Augmenting a statistical translation system with a translation memory. In Proceedings of the 10th Annual Conference of the European Association for Machine Translation, pages 126–132. Chris Hokamp. 2017. Ensembling factored neural machine translation models for automatic post-editing and quality estimation. Computing Research Repository, arXiv:1706.05083. Chris Hokamp and Qun Liu. 2017. Lexically constrained decoding for sequence generation using grid beam search. Computing Research Repository, arXiv:1704.07138. Heikki Hyyr¨o. 2001. Explaining and extending the bitparallel approximate string matching algorithm of Myers. Technical report, Dept. of Computer and Information Sciences, University of Tampere. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. Computing Research Repository, arXiv:1701.02810. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Philipp Koehn. 2009. Statistical Machine Translation. Cambridge University Press. Philipp Koehn, Alexandra Birch, and Ralf Steinberger. 2009. 462 machine translation systems for Europe. Proceedings of MT Summit XII, pages 65–72. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180. Philipp Koehn and Jean Senellart. 2010a. Convergence of translation memory and statistical machine translation. In Proceedings of AMTA Workshop on MT Research and the Translation Industry, pages 21–31. Philipp Koehn and Jean Senellart. 2010b. Fast approximate string matching with suffix arrays and a* parsing. In Proceedings of the ninth Conference of the Association for Machine Translation in the Americas. Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 228–231. Vladimir Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10(8):707–710. Gonzalo Navarro. 2001. A guided tour to approximate string matching. ACM computing surveys (CSUR), 33(1):31–88. John Ortega, Felipe S´anchez-Martınez, and Mikel Forcada. 2016. Fuzzy-match repair using black-box machine translation systems: what can be expected? In Proceedings of the 2016 Conference of the Association for Machine Translation in the Americas, pages 27–39. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Caroline Rossi and Jean-Pierre Chevrot. 2019. Uses and perceptions of machine translation at the European Commission. Journal of Specialised Translation, 31:177–200. Michel Simard and Pierre Isabelle. 2009. Phrase-based machine translation in a computer-assisted translation environment. In Proceedings of MT Summit XII, pages 120–127. Michel Simard and Philippe Langlais. 2001. Subsentential exploitation of translation memories. 
In Machine Translation Summit 8, pages 335–339. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 2006 Conference of the Association for Machine Translation in the Americas, pages 223–231. Ralf Steinberger, Andreas Eisele, Szymon Klocek, Spyridon Pilos, and Patrick Schl¨uter. 2013. DGTTM: A freely available translation memory in 22 languages. Computing Research Repository, arXiv:1309.5226. Tom Vanallemeersch and Vincent Vandeghinste. 2015. Assessing linguistically aware fuzzy matching in translation memories. In Proceedings of the 18th Annual Conference of the European Association for Machine Translation, pages 153–160. Kun Wang, Chengqing Zong, and Keh-Yih Su. 2013. Integrating translation memory into phrase-based machine translation during decoding. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long papers), pages 11–21. Jingyi Zhang, Masao Utiyama, Eiichro Sumita, Graham Neubig, and Satoshi Nakamura. 2018. Guiding neural machine translation with retrieved translation pieces. Computing Research Repository, arXiv:1804.02559.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1810–1822 Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

Learning Deep Transformer Models for Machine Translation
Qiang Wang1, Bei Li1, Tong Xiao1,2∗, Jingbo Zhu1,2, Changliang Li3, Derek F. Wong4, Lidia S. Chao4
1NLP Lab, Northeastern University, Shenyang, China
2NiuTrans Co., Ltd., Shenyang, China
3Kingsoft AI Lab, Beijing, China
4NLP2CT Lab, University of Macau, Macau, China
[email protected], libei [email protected], {xiaotong,zhujingbo}@mail.neu.edu.com, [email protected], {derekfw,lidiasc}@um.edu.mo
∗Corresponding author.

Abstract
Transformer is the state-of-the-art model in recent machine translation evaluations. Two strands of research are promising to improve models of this kind: the first uses wide networks (a.k.a. Transformer-Big) and has been the de facto standard for the development of the Transformer system, and the other uses deeper language representation but faces the difficulty arising from learning deep networks. Here, we continue the line of research on the latter. We claim that a truly deep Transformer model can surpass the Transformer-Big counterpart by 1) proper use of layer normalization and 2) a novel way of passing the combination of previous layers to the next. On WMT'16 English-German, NIST OpenMT'12 Chinese-English and larger WMT'18 Chinese-English tasks, our deep system (30/25-layer encoder) outperforms the shallow Transformer-Big/Base baseline (6-layer encoder) by 0.4∼2.4 BLEU points. As another bonus, the deep model is 1.6X smaller in size and 3X faster in training than Transformer-Big. The source code is available at https://github.com/wangqiangneu/dlcl.

1 Introduction
Neural machine translation (NMT) models have advanced the previous state of the art by learning mappings between sequences via neural networks and attention mechanisms (Sutskever et al., 2014; Bahdanau et al., 2015). The earliest of these read and generate word sequences using a series of recurrent neural network (RNN) units, and the improvement continues when 4-8 layers are stacked for a deeper model (Luong et al., 2015; Wu et al., 2016). More recently, the system based on multi-layer self-attention (call it Transformer) has shown strong results on several large-scale tasks (Vaswani et al., 2017). In particular, approaches of this kind benefit greatly from a wide network with more hidden states (a.k.a. Transformer-Big), whereas simply deepening the network has not been found to outperform the "shallow" counterpart (Bapna et al., 2018). Do deep models help Transformer? It is still an open question for the discipline. For vanilla Transformer, learning deeper networks is not easy because there is already a relatively deep model in use (a standard Transformer encoder has 6 layers, each consisting of two sub-layers, and more sub-layers are involved on the decoder side). It is well known that such deep networks are difficult to optimize due to the gradient vanishing/exploding problem (Pascanu et al., 2013; Bapna et al., 2018). We note that, despite the significant development effort, simply stacking more layers does not benefit the system and leads to a failure of training in some of our experiments. A promising attempt to address this issue is Bapna et al. (2018)'s work. They trained a 16-layer Transformer encoder by using an enhanced attention model. In this work, we continue this line of research and go towards a much deeper encoder for Transformer.
We choose encoders to study because they have a greater impact on performance than decoders and require less computational cost (Domhan, 2018). Our contributions are threefold:

• We show that the proper use of layer normalization is the key to learning deep encoders. The deep network of the encoder can be optimized smoothly by relocating the layer normalization unit. While the location of layer normalization has been discussed in recent systems (Vaswani et al., 2018; Domhan, 2018; Klein et al., 2017), as far as we know, its impact has not been studied in deep Transformer.

• Inspired by the linear multi-step method in numerical analysis (Ascher and Petzold, 1998), we propose an approach based on dynamic linear combination of layers (DLCL) to memorize the features extracted from all preceding layers. This overcomes the problem with the standard residual network, where a residual connection relies only on the output of the layer one step below and may forget the earlier layers.

• We successfully train a 30-layer encoder, far surpassing the deepest encoder reported so far (Bapna et al., 2018). To the best of our knowledge, this is the deepest encoder used in NMT.

On WMT'16 English-German, NIST OpenMT'12 Chinese-English, and the larger WMT'18 Chinese-English translation tasks, we show that our deep system (30/25-layer encoder) yields a BLEU improvement of 1.3∼2.4 points over the base model (Transformer-Base with 6 layers). It even outperforms Transformer-Big by 0.4∼0.6 BLEU points, but requires 1.6X fewer model parameters and 3X less training time. More interestingly, our deep model is 10% faster than Transformer-Big in inference speed.

Figure 1: Examples of the pre-norm residual unit and the post-norm residual unit. F = sub-layer, and LN = layer normalization.

2 Post-Norm and Pre-Norm Transformer
The Transformer system and its variants follow the standard encoder-decoder paradigm. On the encoder side, there are a number of identical stacked layers. Each of them is composed of a self-attention sub-layer and a feed-forward sub-layer. The attention model used in Transformer is multi-head attention, and its output is fed into a fully connected feed-forward network. Likewise, the decoder has another stack of identical layers. It has an encoder-decoder attention sub-layer in addition to the two sub-layers used in each encoder layer. In general, because the encoder and the decoder share a similar architecture, we can use the same method to improve them. In this section, we discuss the more general case, not limited to the encoder or the decoder.

2.1 Model Layout
For Transformer, it is not easy to train stacked layers on either the encoder side or the decoder side. Stacking all these sub-layers prevents the efficient flow of information through the network, and probably leads to the failure of training. Residual connections and layer normalization are adopted as a solution. Let F be a sub-layer in the encoder or decoder, and θ_l be the parameters of the sub-layer. A residual unit is defined to be (He et al., 2016b):

x_{l+1} = f(y_l)   (1)
y_l = x_l + F(x_l; θ_l)   (2)

where x_l and x_{l+1} are the input and output of the l-th sub-layer, and y_l is the intermediate output followed by the post-processing function f(·). In this way, x_l is explicitly exposed to y_l (see Eq. (2)).
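As a concrete reference, here is a minimal PyTorch-style sketch of this residual unit with the two placements of layer normalization compared next: post-norm, where the post-processing function f is LN, and pre-norm, where f is the identity and LN is applied to the sub-layer input. The sub-layer F is reduced to a small feed-forward block, and the class names and the toy gradient check at the bottom are ours (an illustration of the argument made in Section 2.2, not the authors' released fairseq code).

```python
import torch
import torch.nn as nn

class PostNormSublayer(nn.Module):
    """x_{l+1} = LN(x_l + F(x_l)) : post-norm placement."""
    def __init__(self, sublayer, d_model):
        super().__init__()
        self.sublayer = sublayer              # F: e.g. self-attention or feed-forward
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return self.norm(x + self.sublayer(x))

class PreNormSublayer(nn.Module):
    """x_{l+1} = x_l + F(LN(x_l)) : pre-norm placement."""
    def __init__(self, sublayer, d_model):
        super().__init__()
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return x + self.sublayer(self.norm(x))

if __name__ == "__main__":
    # Toy check of the gradient-flow argument: stack many sub-layers and
    # inspect the gradient norm that reaches the bottom of the stack.
    torch.manual_seed(0)
    d, depth = 64, 40
    make_ffn = lambda: nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
    for name, wrapper in [("post-norm", PostNormSublayer), ("pre-norm", PreNormSublayer)]:
        stack = nn.Sequential(*[wrapper(make_ffn(), d) for _ in range(depth)])
        x = torch.randn(8, d, requires_grad=True)
        stack(x).sum().backward()
        print(name, "gradient norm at the input:", x.grad.norm().item())
```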
Moreover, layer normalization is adopted to reduce the variance of the sub-layer output, because hidden state dynamics occasionally cause a much longer training time for convergence. There are two ways to incorporate layer normalization into the residual network.

• Post-Norm. In early versions of Transformer (Vaswani et al., 2017), layer normalization is placed after the element-wise residual addition (see Figure 1(a)), like this:

x_{l+1} = LN(x_l + F(x_l; θ_l))   (3)

where LN(·) is the layer normalization function, whose parameter is dropped for simplicity. It can be seen as a post-processing step of the output (i.e., f(x) = LN(x)).

• Pre-Norm. In recent implementations (Klein et al., 2017; Vaswani et al., 2018; Domhan, 2018), layer normalization is applied to the input of every sub-layer (see Figure 1(b)):

x_{l+1} = x_l + F(LN(x_l); θ_l)   (4)

Eq. (4) regards layer normalization as a part of the sub-layer, and does nothing for post-processing of the residual connection (i.e., f(x) = x); note that in this case an additional layer normalization has to be added on top of the stack to prevent the excessively large values caused by summing unnormalized outputs.

Both of these methods are good choices for the implementation of Transformer. In our experiments, they show comparable performance in BLEU for a system based on a 6-layer encoder (Section 5.1).

2.2 On the Importance of Pre-Norm for Deep Residual Network
The situation is quite different when we switch to deeper models. More specifically, we find that pre-norm is more efficient for training than post-norm if the model goes deeper. This can be explained by examining back-propagation, the core process for obtaining gradients for the parameter update. Here we take a stack of L sub-layers as an example. Let E be the loss used to measure how many errors occur in the system prediction, and x_L be the output of the topmost sub-layer. For post-norm Transformer, given a sub-layer l, the differential of E with respect to x_l can be computed by the chain rule, and we have

\frac{\partial E}{\partial x_l} = \frac{\partial E}{\partial x_L} \times \prod_{k=l}^{L-1} \frac{\partial \mathrm{LN}(y_k)}{\partial y_k} \times \prod_{k=l}^{L-1} \Big(1 + \frac{\partial F(x_k; \theta_k)}{\partial x_k}\Big)   (5)

where \prod_{k=l}^{L-1} \frac{\partial \mathrm{LN}(y_k)}{\partial y_k} corresponds to the backward pass of the layer normalization, and \prod_{k=l}^{L-1} \big(1 + \frac{\partial F(x_k; \theta_k)}{\partial x_k}\big) to the backward pass of the sub-layer with the residual connection. Likewise, we have the gradient for pre-norm (see Appendix A for a detailed derivation):

\frac{\partial E}{\partial x_l} = \frac{\partial E}{\partial x_L} \times \Big(1 + \sum_{k=l}^{L-1} \frac{\partial F(\mathrm{LN}(x_k); \theta_k)}{\partial x_l}\Big)   (6)

Obviously, Eq. (6) establishes a direct way to pass the error gradient \partial E / \partial x_L from top to bottom. Its merit lies in the fact that the number of product terms on the right-hand side does not depend on the depth of the stack. In contrast, Eq. (5) is inefficient for passing gradients back, because the residual connection is not a bypass of the layer normalization unit (see Figure 1(a)). Instead, gradients have to be passed through the LN(·) of each sub-layer. This in turn introduces the term \prod_{k=l}^{L-1} \frac{\partial \mathrm{LN}(y_k)}{\partial y_k} into the right-hand side of Eq. (5), and poses a higher risk of gradient vanishing or exploding as L grows larger. This was confirmed by our experiments, in which we successfully trained a pre-norm Transformer system with a 20-layer encoder on the WMT English-German task, whereas the post-norm Transformer system failed to train for a deeper encoder (Section 5.1).

3 Dynamic Linear Combination of Layers
The residual network is the most common approach to learning deep networks, and plays an important role in Transformer.
In principle, residual networks can be seen as instances of the ordinary differential equation (ODE), behaving like the forward Euler discretization with an initial value (Chang et al., 2018; Chen et al., 2018b). Euler's method is probably the most popular first-order solution to an ODE, but it is not yet accurate enough. A possible reason is that only one previous step is used to predict the current value (Butcher, 2003). (Some other single-step methods, e.g. the Runge-Kutta method, can obtain a higher order by taking several intermediate steps (Butcher, 2003); a higher order generally means more accurate.) In MT, the single-step property of the residual network makes the model "forget" distant layers (Wang et al., 2018b). As a result, there is no easy access to features extracted from lower-level layers if the model is very deep. Here, we describe a model which makes direct links with all previous layers and offers efficient access to lower-level representations in a deep stack. We call it dynamic linear combination of layers (DLCL). The design is inspired by the linear multi-step method (LMM) in numerical ODE (Ascher and Petzold, 1998). Unlike Euler's method, LMM can effectively reuse the information in the previous steps by linear combination to achieve a higher order. Let {y_0, ..., y_l} be the output of layers 0 ∼ l. The input of layer l + 1 is defined to be

x_{l+1} = G(y_0, \ldots, y_l)   (7)

where G(·) is a linear function that merges the previously generated values {y_0, ..., y_l} into a new value. For pre-norm Transformer, we define G(·) to be

G(y_0, \ldots, y_l) = \sum_{k=0}^{l} W_k^{(l+1)} \, \mathrm{LN}(y_k)   (8)

where W_k^{(l+1)} ∈ R is a learnable scalar that weights each incoming layer in a linear manner. Eq. (8) provides a way to learn the preference for layers at different levels of the stack. Even for the same incoming layer, its contribution to succeeding layers can be different (e.g. W_k^{(i)} ≠ W_k^{(j)} for i ≠ j). Also, the method is applicable to the post-norm Transformer model. For post-norm, G(·) can be redefined as:

G(y_0, \ldots, y_l) = \mathrm{LN}\Big(\sum_{k=0}^{l} W_k^{(l+1)} y_k\Big)   (9)

Figure 2: Connection weights for a 3-layer encoder: (a) residual connection (He et al., 2016a), (b) dense residual connection (Britz et al., 2017; Dou et al., 2018), (c) multi-layer representation fusion (Wang et al., 2018b) / transparent attention (Bapna et al., 2018) and (d) our approach. y_0 denotes the input embedding. Red denotes weights learned by the model.

Comparison to LMM. DLCL differs from LMM in two aspects, though their fundamental model is the same. First, DLCL learns the weights in an end-to-end fashion rather than assigning their values deterministically, e.g. by polynomial interpolation. This offers a more flexible way to control the model behavior. Second, DLCL has an arbitrary size of the past history window, while LMM generally takes a limited history into account (Lóczi, 2018). Also, recent work shows successful applications of LMM in computer vision, but only two previous steps are used in their LMM-like system (Lu et al., 2018).

Comparison to existing neural methods. Note that DLCL is a very general approach. For example, the standard residual network is a special case of DLCL, where W_l^{(l+1)} = 1 and W_k^{(l+1)} = 0 for k < l. Figure 2 compares different methods of connecting a 3-layer network.
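A minimal PyTorch-style sketch of the pre-norm combination in Eq. (8) is given below: each layer receives a learned scalar-weighted sum of the layer-normalized outputs of all preceding layers, including the embedding y_0. The class and parameter names are ours, the initialization (weight 1 on the most recent output) is one reasonable choice rather than necessarily the paper's, and the released code linked above should be consulted for the exact implementation.

```python
import torch
import torch.nn as nn

class DLCLEncoder(nn.Module):
    """Pre-norm dynamic linear combination of layers (Eq. 7-8), sketched.

    `layers` is any list of modules mapping a (batch, length, d_model) tensor
    to a tensor of the same shape (e.g. Transformer encoder layers).
    """
    def __init__(self, layers, d_model):
        super().__init__()
        self.layers = nn.ModuleList(layers)
        num = len(self.layers)
        # norms[k] normalises y_k (the output of layer k; y_0 is the embedding)
        self.norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(num + 1))
        # weights[l][k] = W_k^{(l+1)}; initialised so that only the most recent
        # output contributes, resembling a plain residual stack at the start
        self.weights = nn.ParameterList(
            nn.Parameter(torch.eye(l + 1)[-1]) for l in range(num))

    def forward(self, emb):
        ys = [emb]                                            # y_0
        for l, layer in enumerate(self.layers):
            normed = [self.norms[k](y) for k, y in enumerate(ys)]
            x = sum(self.weights[l][k] * normed[k] for k in range(len(ys)))  # Eq. (8)
            ys.append(layer(x))                               # y_{l+1}
        return self.norms[-1](ys[-1])                         # top-level LN (pre-norm)
```

For the post-norm variant (Eq. 9), layer normalization would instead be applied once to the weighted sum rather than to each incoming output.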
We see that the densely residual network is a fully-connected network with a uniform weighting schema (Britz et al., 2017; Dou et al., 2018). Multi-layer representation fusion (Wang et al., 2018b) and transparent attention (call it TA) (Bapna et al., 2018) methods can learn a weighted model to fuse layers but they are applied to the topmost layer only. The DLCL model can cover all these methods. It provides ways of weighting and connecting layers in the entire stack. We emphasize that although the idea of weighting the encoder layers by a learnable scalar is similar to TA, there are two key differences: 1) Our method encourages earlier interactions between layers during the encoding process, while the encoder layers in TA are combined until the standard encoding process is over; 2) For an encoder layer, instead of learning a unique weight for each decoder layer like TA, we make a separate weight for each successive encoder layers. In this way, we can create more connections between layers6. 4 Experimental Setup We first evaluated our approach on WMT’16 English-German (En-De) and NIST’12 ChineseEnglish (Zh-En-Small) benchmarks respectively. To make the results more convincing, we also experimented on a larger WMT’18 Chinese-English dataset (Zh-En-Large) with data augmentation by back-translation (Sennrich et al., 2016a). 4.1 Datasets and Evaluation For the En-De task, to compare with Vaswani et al. (2017)’s work, we use the same 4.5M preprocessed data 7, which has been tokenized and 6Let the encoder depth be M and the decoder depth be N (M > N for a deep encoder model). Then TA newly adds O(M × N) connections, which are fewer than ours of O(M 2) 7https://drive.google.com/uc?export= download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8 1814 Model Param. Batch Updates †Times BLEU ∆ (×4096) (×100k) Vaswani et al. (2017) (Base) 65M 1 1 reference 27.3 Bapna et al. (2018)-deep (Base, 16L) 137M 28.0 Vaswani et al. (2017) (Big) 213M 1 3 3x 28.4 Chen et al. (2018a) (Big) 379M 16 †0.075 1.2x 28.5 He et al. (2018) (Big) †210M 1 29.0 Shaw et al. (2018) (Big) †210M 1 3 3x 29.2 Dou et al. (2018) (Big) 356M 1 29.2 Ott et al. (2018) (Big) 210M 14 0.25 3.5x 29.3 post-norm Transformer (Base) 62M 1 1 1x 27.5 reference Transformer (Big) 211M 1 3 3x 28.8 +1.3 Transformer-deep (Base, 20L) 106M 2 0.5 1x failed failed DLCL (Base) 62M 1 1 1x 27.6 +0.1 DLCL-deep (Base, 25L) 121M 2 0.5 1x 29.2 +1.7 pre-norm Transformer (Base) 62M 1 1 1x 27.1 reference Transformer (Big) 211M 1 3 3x 28.7 +1.6 Transformer-deep (Base, 20L) 106M 2 0.5 1x 28.9 +1.8 DLCL (Base) 62M 1 1 1x 27.3 +0.2 DLCL-deep (Base, 30L) 137M 2 0.5 1x 29.3 +2.2 Table 1: BLEU scores [%] on English-German translation. Batch indicates the corresponding batch size if running on 8 GPUs. Times ∝Batch×Updates, which can be used to approximately measure the required training time. † denotes an estimate value. Note that “-deep” represents the best-achieved result as depth changes. jointly byte pair encoded (BPE) (Sennrich et al., 2016b) with 32k merge operations using a shared vocabulary 8. We use newstest2013 for validation and newstest2014 for test. For the Zh-En-Small task, we use parts of the bitext provided within NIST’12 OpenMT9. We choose NIST MT06 as the validation set, and MT04, MT05, MT08 as the test sets. All the sentences are word segmented by the tool provided within NiuTrans (Xiao et al., 2012). We remove the sentences longer than 100 and end up with about 1.9M sentence pairs. 
Then BPE with 32k operations is used for both sides independently, resulting in a 44k Chinese vocabulary and a 33k English vocabulary respectively. For the Zh-En-Large task, we use exactly the same 16.5M dataset as Wang et al. (2018a), composing of 7.2M-sentence CWMT corpus, 4.2M-sentence UN and News-Commentary combined corpus, and back-translation of 5M-sentence monolingual data from NewsCraw2017. We refer the reader to Wang et al. (2018a) for the details. 8The tokens with frequencies less than 5 are filtered out from the shared vocabulary. 9LDC2000T46, LDC2000T47, LDC2000T50, LDC2003E14, LDC2005T10, LDC2002E18, LDC2007T09, LDC2004T08 For evaluation, we first average the last 5 checkpoints, each of which is saved at the end of an epoch. And then we use beam search with a beam size of 4/6 and length penalty of 0.6/1.0 for EnDe/Zh-En tasks respectively. We measure casesensitive/insensitive tokenized BLEU by multibleu.perl for En-De and Zh-En-Small respectively, while case-sensitive detokenized BLEU is reported by the official evaluation script mtevalv13a.pl for Zh-En-Large. Unless noted otherwise we run each experiment three times with different random seeds and report the mean of the BLEU scores across runs10. 4.2 Model and Hyperparameters All experiments run on fairseq-py11 with 8 NVIDIA Titan V GPUs. For the post-norm Transformer baseline, we replicate the model setup of Vaswani et al. (2017). All models are optimized by Adam (Kingma and Ba, 2014) with β1 = 0.9, β2 = 0.98, and ϵ = 10−8. In training warmup (warmup = 4000 steps), the learning rate linearly increases from 10−7 to lr =7×10−4/5×10−4 for 10Due to resource constraints, all experiments on Zh-EnLarge task only run once. 11https://github.com/pytorch/fairseq 1815 Model (Base, 16L) BLEU post-norm Bapna et al. (2018) 28.0 Transformer failed DLCL 28.4 pre-norm Transformer 28.0 DLCL 28.2 Table 2: Compare with Bapna et al. (2018) on WMT’16 English-German translation under a 16-layer encoder. Transformer-Base/Big respectively, after which it is decayed proportionally to the inverse square root of the current step. Label smoothing εls=0.1 is used as regularization. For the pre-norm Transformer baseline, we follow the setting as suggested in tensor2tensor12. More specifically, the attention dropout Patt = 0.1 and feed-forward dropout Pff = 0.1 are additionally added. And some hyper-parameters for optimization are changed accordingly: β2 = 0.997, warmup = 8000 and lr = 10−3/7×10−4 for Transformer-Base/Big respectively. For both the post-norm and pre-norm baselines, we batch sentence pairs by approximate length and restrict input and output tokens per batch to batch = 4096 per GPU. We set the update steps according to corresponding data sizes. More specifically, the Transformer-Base/Big is updated for 100k/300k steps on the En-De task as Vaswani et al. (2017), 50k/100k steps on the Zh-En-Small task, and 200k/500k steps on the Zh-En-Large task. In our model, we use the dynamic linear combination of layers for both encoder and decoder. For efficient computation, we only combine the output of a complete layer rather than a sub-layer. It should be noted that for deep models (e.g. L ≥ 20), it is hard to handle a full batch in a single GPU due to memory size limitation. We solve this issue by accumulating gradients from two small batches (e.g. batch = 2048) before each update (Ott et al., 2018). In our primitive experiments, we observed that training with larger batches and learning rates worked well for deep models. 
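The learning-rate rule described above (linear warmup followed by inverse-square-root decay) can be written compactly as below; this is our sketch of the rule with the Transformer-Base constants, not the exact fairseq scheduler code.

```python
def inverse_sqrt_lr(step, peak_lr=7e-4, warmup=4000, init_lr=1e-7):
    """Linear warmup from init_lr to peak_lr, then decay proportional to 1/sqrt(step)."""
    if step < warmup:
        return init_lr + (peak_lr - init_lr) * step / warmup
    return peak_lr * (warmup ** 0.5) / (step ** 0.5)

# The deep models below use a larger effective batch (via gradient accumulation
# over two half-size batches), a larger peak learning rate and a longer warmup.
```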
Therefore all the results of deep models are reported with batch = 8192, lr = 2×10−3 and warmup = 16,000 unless otherwise stated. For fairness, we only use half of the updates of the baseline (e.g. update = 50k) to ensure the same amount of data that we actually see in training (the tensor2tensor settings referred to above are available at https://github.com/tensorflow/tensor2tensor). We report the details in Appendix B.

5 Results
5.1 Results on the En-De Task
In Table 1, we first report results on WMT En-De, where we compare to the existing systems based on self-attention. Obviously, while almost all previous results based on Transformer-Big (marked by Big) have higher BLEU than those based on Transformer-Base (marked by Base), a larger parameter size and longer training epochs are required. As for our approach, considering the post-norm case first, we can see that our Transformer baselines are superior to Vaswani et al. (2017) in both the Base and Big cases. When increasing the encoder depth, e.g. L = 20, the vanilla Transformer failed to train, which is consistent with Bapna et al. (2018). We attribute it to the vanishing gradient problem, based on the observation that the gradient norm in the low layers (e.g. the embedding layer) approaches 0. On the contrary, post-norm DLCL solves this issue and achieves the best result when L = 25. The situation changes when switching to pre-norm. While it slightly underperforms the post-norm counterpart in shallow networks, pre-norm Transformer benefits more from the increase in encoder depth. More concretely, pre-norm Transformer achieves its optimal result when L = 20 (see Figure 3(a)), outperforming the 6-layer baseline by 1.8 BLEU points. It indicates that pre-norm is easier to optimize than post-norm in deep networks. Beyond that, we successfully train a 30-layer encoder with our method, resulting in a further improvement of 0.4 BLEU points. This is 0.6 BLEU points higher than the pre-norm Transformer-Big. It should be noted that although our best score of 29.3 is the same as Ott et al. (2018), our approach requires 3.5X fewer training epochs than theirs. To fairly compare with transparent attention (TA) (Bapna et al., 2018), we separately list the results using a 16-layer encoder in Table 2. It can be seen that pre-norm Transformer obtains the same BLEU score as TA without the requirement of a complicated attention design. However, DLCL outperforms TA in both the post-norm and pre-norm cases. It is worth noting that TA achieves its best result when the encoder depth is 16, while we can further improve performance by training deeper encoders.

Model (pre-norm)               Param.  Valid.  MT04   MT05   MT08   Average
Transformer (Base)             84M     51.27   54.41  49.43  45.33  49.72
Transformer (Big)              257M    52.30   55.37  52.21  47.40  51.66
Transformer-deep (Base, 25L)   144M    52.50   55.80  51.98  47.26  51.68
DLCL (Base)                    84M     51.61   54.91  50.58  46.11  50.53
DLCL-deep (Base, 25L)          144M    53.57   55.91  52.30  48.12  52.11
Table 3: BLEU scores [%] on NIST'12 Chinese-English translation.

Model                                   Param.   newstest17  newstest18  ∆avg.
Wang et al. (2018a) (post-norm, Base)   102.1M   25.9
pre-norm Transformer (Base)             102.1M   25.8        25.9        reference
pre-norm Transformer (Big)              292.4M   26.4        27.0        +0.9
pre-norm DLCL-deep (Base, 25L)          161.5M   26.7        27.1        +1.0
pre-norm DLCL-deep (Base, 30L)          177.2M   26.9        27.4        +1.3
Table 4: BLEU scores [%] on WMT'18 Chinese-English translation.
Figure 3: BLEU scores [%] against the encoder depth for pre-norm Transformer and pre-norm DLCL on the English-German and Chinese-English tasks: (a) WMT En-De, (b) NIST Zh-En.

5.2 Results on the Zh-En-Small Task
As seen from the En-De task, pre-norm is more effective than the post-norm counterpart in deep networks. Therefore we evaluate our method in the pre-norm case on the Zh-En task. As shown in Table 3, DLCL is superior to the baseline already when the network is shallow. Interestingly, both Transformer and DLCL achieve their best results when we use a 25-layer encoder. The 25-layer Transformer can approach the performance of Transformer-Big, while our deep model outperforms it by about 0.5 BLEU points with an equivalent parameter size. This confirms that our approach is a good alternative to Transformer no matter how deep it is.

5.3 Results on the Zh-En-Large Task
While deep Transformer models, in particular the deep pre-norm DLCL, show better results than Transformer-Big on the En-De and Zh-En-Small tasks, both data sets are relatively small, and the improved performance over Transformer-Big might be partially due to over-fitting in the wider model. For a more challenging task, we report the results on the Zh-En-Large task in Table 4. We can see that the 25-layer pre-norm DLCL slightly surpassed Transformer-Big, and the advantage is larger when using a 30-layer encoder. This result indicates that the claim that a deep network can defeat Transformer-Big holds, and is not affected by the size of the data set.

Figure 4: GPU generation speed (target tokens/sec.) against the depth of the encoder for pre-norm DLCL on the English-German task (batch size = 32, beam size = 4).

6 Analysis
6.1 Effect of Encoder Depth
In Figure 3, we plot the BLEU score as a function of encoder depth for pre-norm Transformer and DLCL on the En-De and Zh-En-Small tasks. First of all, both methods benefit from an increase in encoder depth at the beginning. Remarkably, when the encoder depth reaches 20, both deep models achieve performance comparable to Transformer-Big, and even exceed it when the encoder depth is further increased in DLCL. Note that pre-norm Transformer degenerates earlier and is less robust than DLCL when the depth is beyond 20. However, a deeper network (>30 layers) does not bring more benefits. Worse still, deeper networks consume a lot of memory, making it impossible to train them efficiently. We also report the inference speed on GPU in Figure 4. As expected, the speed decreases linearly with the number of encoder layers. Nevertheless, our system with a 30-layer encoder is still faster than Transformer-Big, because the encoding process is independent of the beam size and runs only once. In contrast, the decoder suffers from severe autoregressive problems.

6.2 Effect of Decoder Depth

Enc. Depth  Dec. Depth  BLEU   Speed
6           4           27.12  3088.3
6           6           27.33  2589.2
6           8           27.42  2109.6
Table 5: Tokenized BLEU scores [%] and GPU generation speed (target tokens per second) of pre-norm Transformer (Base) on the test set of WMT English-German (batch size = 32, beam size = 4).

Table 5 shows the effects of decoder depth on BLEU and inference speed on GPU.
Different from the encoder, increasing the depth of the decoder only yields a slight BLEU improvement, but the cost is high: for every two layers added, the translation speed drops evenly by approximately 500 tokens per second. This indicates that exploring deep encoders may be more promising than deep decoders for NMT.

6.3 Ablation Study
We report the ablation study results in Table 6. We first observe a modest decrease when removing the introduced layer normalization in Eq. (8). Then we try two methods to replace the learnable weights with constant weights: All-One (W_j^{(i)} = 1) and Average (W_j^{(i)} = 1/(i+1)). We can see that these two methods consistently hurt performance, in particular in the case of All-One. This indicates that making the weights learnable is important for our model. Moreover, removing the added layer normalization in the Average model makes the BLEU score drop by 0.28, which suggests that adding layer normalization helps more if we use the constant weights. In addition, we did two interesting experiments on big models. The first one is to replace the base encoder with a big encoder in pre-norm Transformer-Base. The other one is to use DLCL to train a deep-and-wide Transformer (12 layers). Although both of them benefit from the increased network capacity, the gain is smaller than for the "thin" counterpart in terms of BLEU, parameter size, and training efficiency.

Model                            BLEU
pre-norm DLCL-20L                28.80
  - layer norm.                  28.67
  - learnable weight (fix 1)     28.22
  - learnable weight (fix 1/N)   28.51
    - layer norm.                28.23
pre-norm Transformer-Base        27.11
  + big encoder                  27.59
pre-norm Transformer-Big         28.72
  + 12-layer encoder (DLCL)      29.17
Table 6: Ablation results by tokenized BLEU [%] on the test set of WMT English-German translation.

6.4 Visualization of Learned Weights
We visually present the learned weight matrices of the 30-layer encoder (Figure 5(a)) and its 6-layer decoder (Figure 5(b)) in our pre-norm DLCL-30L model on the En-De task. For a clearer contrast, we mask out the points with an absolute value of less than 0.1 or 5% of the maximum per row. We can see that the connections in the early layers are dense, but become sparse as the depth increases. This indicates that making full use of the earlier layers is necessary due to insufficient information at the beginning of the network. Also, we find that most of the large weight values concentrate on the right of the matrix, which indicates that the impact of an incoming layer is usually related to its distance from the outgoing layer. Moreover, for a fixed layer's output y_i, it is obvious that its contribution to successive layers changes dynamically (one column). To make this clear, we extract the weights of y_10 in Figure 5(c). In contrast, in most previous paradigms of dense residual connections, the output of each layer remains fixed for subsequent layers.

Figure 5: A visualization example of learned weights in our 30-layer pre-norm DLCL model: (a) 30-layer encoder of DLCL, (b) 6-layer decoder of DLCL, (c) weight distribution of y_10 in the encoder.

7 Related Work
Deep Models. Deep models have been explored in the context of neural machine translation since the emergence of RNN-based models. To ease optimization, researchers tried to reduce the number of non-linear transitions (Zhou et al., 2016; Wang et al., 2017).
But these attempts are limited to the RNN architecture and may not be straightforwardly applicable to the current Transformer model. Perhaps, the most relevant work to what is doing here is Bapna et al. (2018)’s work. They pointed out that vanilla Transformer was hard to train if the depth of the encoder was beyond 12. They successfully trained a 16-layer Transformer encoder by attending the combination of all encoder layers to the decoder. In their approach, the encoder layers are combined just after the encoding is completed, but not during the encoding process. In contrast, our approach allows the encoder layers to interact earlier, which has been proven to be effective in machine translation (He et al., 2018) and text match (Lu and Li, 2013). In addition to machine translation, deep Transformer encoders are also used for language modeling (Devlin et al., 2018; Al-Rfou et al., 2018). For example, Al-Rfou et al. (2018) trained a character language model with a 64layer Transformer encoder by resorting to auxiliary losses in intermediate layers. This method is orthogonal to our DLCL method, though it is used for language modeling, which is not a very heavy task. Densely Residual Connections. Densely residual connections are not new in NMT. They have been studied for different architectures, e.g., RNN (Britz et al., 2017) and Transformer (Dou et al., 2018). Some of the previous studies fix the weight of each layer to a constant, while others learn a weight distribution by using either the self-attention model (Wang et al., 2018b) or a softmax-normalized learnable vector (Peters et al., 2018). They focus more on learning connections from lower-level layers to the topmost layer. Instead, we introduce additional connectivity into the network and learn more densely connections for each layer in an end-to-end fashion. 8 Conclusion We have studied deep encoders in Transformer. We have shown that the deep Transformer models can be easily optimized by proper use of layer normalization, and have explained the reason behind it. Moreover, we proposed an approach based on a dynamic linear combination of layers and successfully trained a 30-layer Transformer system. It is the deepest encoder used in NMT so far. Experimental results show that our thin-but-deep encoder can match or surpass the performance of Transformer-Big. Also, its model size is 1.6X smaller. In addition, it requires 3X fewer training epochs and is 10% faster for inference. Acknowledgements This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 61876035, 61732005, 61432013 and 61672555), the Fundamental Research Funds for the Central Universities (Grant No. N181602013), the Joint Project of FDCT-NSFC (Grant No. 045/2017/AFJ), the MYRG from the University of Macau (Grant No. MYRG2017-00087-FST). References Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. 2018. Character-level lan1819 guage modeling with deeper self-attention. arXiv preprint arXiv:1808.04444. Uri M Ascher and Linda R Petzold. 1998. Computer methods for ordinary differential equations and differential-algebraic equations, volume 61. Siam. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In In Proceedings of the 3rd International Conference on Learning Representations. Ankur Bapna, Mia Chen, Orhan Firat, Yuan Cao, and Yonghui Wu. 2018. Training deeper neural machine translation models with transparent attention. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3028–3033. Denny Britz, Anna Goldie, Thang Luong, and Quoc Le. 2017. Massive exploration of neural machine translation architectures. arXiv preprint arXiv:1703.03906. J C Butcher. 2003. Numerical Methods for Ordinary Differential Equations. John Wiley & Sons, New York, NY. Bo Chang, Lili Meng, Eldad Haber, Frederick Tung, and David Begert. 2018. Multi-level residual networks from dynamical systems view. In International Conference on Learning Representations. Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Niki Parmar, Mike Schuster, Zhifeng Chen, et al. 2018a. The best of both worlds: Combining recent advances in neural machine translation. arXiv preprint arXiv:1804.09849. Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. 2018b. Neural ordinary differential equations. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 6572–6583. Curran Associates, Inc. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Tobias Domhan. 2018. How much attention do you need? a granular analysis of neural machine translation architectures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1799–1808. Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Shuming Shi, and Tong Zhang. 2018. Exploiting deep representations for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4253–4262. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016a. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016b. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer. Tianyu He, Xu Tan, Yingce Xia, Di He, Tao Qin, Zhibo Chen, and Tie-Yan Liu. 2018. Layer-wise coordination between encoder and decoder for neural machine translation. In Advances in Neural Information Processing Systems, pages 7955–7965. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. Proceedings of ACL 2017, System Demonstrations, pages 67–72. Lajos L´oczi. 2018. Exact optimal values of stepsize coefficients for boundedness of linear multistep methods. Numerical Algorithms, 77(4):1093–1116. Yiping Lu, Aoxiao Zhong, Quanzheng Li, and Bin Dong. 2018. Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3282–3291, Stockholmsmssan, Stockholm Sweden. PMLR. Zhengdong Lu and Hang Li. 2013. A deep architecture for matching short texts. In Advances in Neural Information Processing Systems, pages 1367–1375. Thang Luong, Hieu Pham, and D. Christopher Manning. 2015. Effective approaches to attention-based neural machine translation. 
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Association for Computational Linguistics. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In WMT, pages 1–9. Association for Computational Linguistics. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pages 1310–1318. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association 1820 for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227–2237. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 86–96. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 464–468. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, et al. 2018. Tensor2tensor for neural machine translation. Vol. 1: MT Researchers Track, page 193. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. Mingxuan Wang, Zhengdong Lu, Jie Zhou, and Qun Liu. 2017. Deep neural machine translation with linear associative unit. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 136–145. Qiang Wang, Bei Li, Jiqiang Liu, Bojian Jiang, Zheyang Zhang, Yinqiao Li, Ye Lin, Tong Xiao, and Jingbo Zhu. 2018a. The niutrans machine translation system for wmt18. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 528–534. Qiang Wang, Fuxue Li, Tong Xiao, Yanyang Li, Yinqiao Li, and Jingbo Zhu. 2018b. Multi-layer representation fusion for neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3015–3026. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Tong Xiao, Jingbo Zhu, Hao Zhang, and Qiang Li. 2012. 
NiuTrans: an open source toolkit for phrase-based and syntax-based machine translation. In Proceedings of the ACL 2012 System Demonstrations, pages 19–24. Association for Computational Linguistics. Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep recurrent models with fast-forward connections for neural machine translation. Transactions of the Association for Computational Linguistics, 4(1):371–383.

A Derivations of Post-Norm Transformer and Pre-Norm Transformer

A general residual unit can be expressed by:

    y_l = x_l + F(x_l; θ_l),    (10)
    x_{l+1} = f(y_l),           (11)

where x_l and x_{l+1} are the input and output of the l-th sub-layer, and y_l is the intermediate output followed by the post-processing function f(·). Recall that the post-norm Transformer incorporates layer normalization (LN(·)) as:

    x_{l+1} = LN(x_l + F(x_l; θ_l)) = LN(x_l + F_post(x_l; θ_l)),    (12)

where F_post(·) = F(·). Note that we omit the parameters of LN for clarity. Similarly, the pre-norm Transformer can be described by:

    x_{l+1} = x_l + F(LN(x_l); θ_l) = x_l + F_pre(x_l; θ_l),    (13)

where F_pre(·) = F(LN(·)). In this way, we can see that both post-norm and pre-norm are special cases of the general residual unit. Specifically, the post-norm Transformer is the special case where

    f_post(x) = LN(x),    (14)

while for the pre-norm Transformer it is

    f_pre(x) = x.    (15)

Here we take a stack of L sub-layers as an example. Let E be the loss used to measure how many errors occur in the system prediction, and let x_L be the output of the top-most sub-layer. Then from the chain rule of back-propagation we obtain:

    ∂E/∂x_l = (∂E/∂x_L) · (∂x_L/∂x_l).    (16)

To analyze this, we can directly decompose ∂x_L/∂x_l layer by layer:

    ∂x_L/∂x_l = (∂x_L/∂x_{L−1}) · (∂x_{L−1}/∂x_{L−2}) · ... · (∂x_{l+1}/∂x_l).    (17)

Considering two adjacent layers as in Eq. (10) and Eq. (11), we have:

    ∂x_{l+1}/∂x_l = (∂x_{l+1}/∂y_l) · (∂y_l/∂x_l) = (∂f(y_l)/∂y_l) · (1 + ∂F(x_l; θ_l)/∂x_l).    (18)

For the post-norm Transformer, it is easy to see that ∂f_post(y_l)/∂y_l = ∂LN(y_l)/∂y_l according to Eq. (14). Putting Eq. (17) and Eq. (18) into Eq. (16), we obtain the differential of E w.r.t. x_l:

    ∂E/∂x_l = (∂E/∂x_L) × ∏_{k=l}^{L−1} ∂LN(y_k)/∂y_k × ∏_{k=l}^{L−1} (1 + ∂F(x_k; θ_k)/∂x_k).    (19)

Eq. (19) indicates that the number of product terms grows linearly with L, which makes the gradient prone to vanishing or explosion. For the pre-norm Transformer, instead of decomposing the gradient layer by layer as in Eq. (17), we can exploit the fact that x_L = x_l + Σ_{k=l}^{L−1} F_pre(x_k; θ_k), obtained by recursively applying Eq. (13):

    x_L = x_{L−1} + F_pre(x_{L−1}; θ_{L−1})
        = x_{L−2} + F_pre(x_{L−2}; θ_{L−2}) + F_pre(x_{L−1}; θ_{L−1})
        = ...
        = x_l + Σ_{k=l}^{L−1} F_pre(x_k; θ_k).    (20)

In this way, we can simplify Eq. (17) to:

    ∂x_L/∂x_l = 1 + Σ_{k=l}^{L−1} ∂F_pre(x_k; θ_k)/∂x_l.    (21)

Since ∂f_pre(y_l)/∂y_l = 1, we can put Eq. (21) into Eq. (16) and obtain:

    ∂E/∂x_l = (∂E/∂x_L) × (1 + Σ_{k=l}^{L−1} ∂F_pre(x_k; θ_k)/∂x_l)
            = (∂E/∂x_L) × (1 + Σ_{k=l}^{L−1} ∂F(LN(x_k); θ_k)/∂x_l).    (22)

B Training Hyper-parameters for Deep Models

Model      Batch   Upd.   Lr      Wu.   PPL
post       4096    100k   7e−4    4k    4.85
post       8192    50k    2e−3    16k   *
post-20L   4096    100k   7e−4    4k    *
post-20L   8192    50k    2e−3    16k   *
pre        4096    100k   1e−3    8k    4.88
pre        8192    50k    2e−3    16k   4.86
pre-20L    4096    100k   1e−3    8k    4.68
pre-20L    8192    50k    2e−3    16k   4.60

Table 7: Hyper-parameter selection for shallow and deep models based on perplexity on the validation set for English-German translation. "post-20L" is short for the post-norm Transformer with a 20-layer encoder; similarly, "pre-20L" denotes the pre-norm Transformer case. * indicates that the model failed to train.

We select hyper-parameters by measuring perplexity on the validation set of the WMT En-De task.
We compare the effects of hyper-parameters in both shallow networks (6 layers) and deep networks (20 layers). We use the standard hyper-parameters for both models as the baselines. More concretely, for post-norm Transformer-Base we set batch/update/lr/warmup to 4096/100k/7×10−4/4k, following the original Transformer, while for pre-norm Transformer-Base the configuration is 4096/100k/10−3/8k, as suggested in tensor2tensor. As for the deep models, we uniformly use the setting 8192/50k/2×10−3/16k. Note that while we use a 2X larger batch size for the deep models, we halve the number of updates; in this way, the amount of training data seen stays the same in all experiments. A larger learning rate is used to speed up convergence with the large batch, and we found that simultaneously increasing the learning rate and the warmup steps worked best. Table 7 reports the results. First of all, we can see that the post-norm Transformer failed to train when the network gets deeper. Worse still, the shallow post-norm network also failed to converge when switching to the setting of the deep networks. We attribute this to the post-norm Transformer being more sensitive to the large learning rate. In contrast, with either a 6-layer or a 20-layer encoder, the pre-norm Transformer benefits from the larger batch and learning rate, and the gain is larger for the deep networks than for the shallow ones.
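To connect the above to code, here is a minimal PyTorch-style sketch. It is our own illustration under stated assumptions, not the authors' released implementation: it contrasts the post-norm sub-layer of Eq. (12) with the pre-norm sub-layer of Eq. (13), and shows one plausible way to realize the dynamic linear combination of layers (DLCL) ablated in Section 6.3, where freezing the aggregation weights corresponds to the All-One and Average rows of Table 6. All class and argument names are ours.

```python
import torch
import torch.nn as nn

class PostNormSubLayer(nn.Module):
    """Eq. (12): x_{l+1} = LN(x_l + F(x_l)), i.e. normalize after the residual sum."""
    def __init__(self, d_model: int, fn: nn.Module):
        super().__init__()
        self.fn = fn                       # F(.): e.g. self-attention or feed-forward
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return self.norm(x + self.fn(x))

class PreNormSubLayer(nn.Module):
    """Eq. (13): x_{l+1} = x_l + F(LN(x_l)); the identity residual path keeps the
    '1 + sum of gradients' term of Eq. (22), which is why deep stacks stay trainable."""
    def __init__(self, d_model: int, fn: nn.Module):
        super().__init__()
        self.fn = fn
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return x + self.fn(self.norm(x))

class DynamicLinearCombination(nn.Module):
    """DLCL-style aggregation: the input of layer l+1 is a weighted sum of the
    layer-normalized outputs of layers 0..l, with one weight per (l, k) pair."""
    def __init__(self, num_layers: int, d_model: int, mode: str = "learnable"):
        super().__init__()
        init = torch.zeros(num_layers + 1, num_layers + 1)
        for i in range(num_layers + 1):
            # "all_one": W_ij = 1; otherwise start from the Average setting W_ij = 1/(i+1).
            init[i, : i + 1] = 1.0 if mode == "all_one" else 1.0 / (i + 1)
        # "learnable": trainable weights; "average"/"all_one": frozen constants (Table 6).
        self.weights = nn.Parameter(init, requires_grad=(mode == "learnable"))
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(num_layers + 1)])

    def forward(self, layer_outputs):
        # layer_outputs: list of tensors y_0..y_l, each of shape [batch, length, d_model]
        l = len(layer_outputs) - 1
        stacked = torch.stack([self.norms[k](y) for k, y in enumerate(layer_outputs)], dim=0)
        w = self.weights[l, : l + 1]
        return torch.einsum("k,kbtd->btd", w, stacked)
```

In this sketch, a deep encoder would wrap each block's F in PreNormSubLayer and, before every new layer, feed DynamicLinearCombination the list of outputs collected so far.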
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1823–1827 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1823 Generating Diverse Translations with Sentence Codes Raphael Shu The University of Tokyo [email protected] Hideki Nakayama The University of Tokyo [email protected] Kyunghyun Cho New York University CIFAR Azrieli Global Scholar [email protected] Abstract Users of machine translation systems may desire to obtain multiple candidates translated in different ways. In this work, we attempt to obtain diverse translations by using sentence codes to condition the sentence generation. We describe two methods to extract the codes, either with or without the help of syntax information. For diverse generation, we sample multiple candidates, each of which conditioned on a unique code. Experiments show that the sampled translations have much higher diversity scores when using reasonable sentence codes, where the translation quality is still on par with the baselines even under strong constraint imposed by the codes. In qualitative analysis, we show that our method is able to generate paraphrase translations with drastically different structures. The proposed approach can be easily adopted to existing translation systems as no modification to the model is required. 1 Introduction When using machine translation systems, users may desire to see different candidate translations other than the best one. In this scenario, users usually expect the system to show candidates with different sentence structures. To obtain diverse translations, conventional neural machine translation (NMT) models allow one to sample translations using the beam search algorithm, however, they usually share similar sentence structures. Recently, various methods (Li et al., 2016; Xu et al., 2018) are proposed for diverse generation. These methods encourage the model to use creative vocabulary to achieve high diversity. Although producing creative words benefits tasks in the dialog domain, when applied to machine translation, it can hurt the translation quality by changing the original meaning. In this work, we are interested in generating multiple valid translations with high diversity. To achieve this, we propose to construct the codes based on semantics-level or syntax-level information of target-side sentences. To generate diverse translations, we constrain the generation model by specifying a particular code as a semantic or syntactic assignment. More concretely, we prefix the target-side sentences with the codes. Then, an NMT model is trained with the original source sentences and the prefixed target sentences. As the model generates tokens in left-to-right order, the probability of emitting each word is predicted conditioned on the assigned code. As each assignment is supposed to correspond to a sentence structure, the candidate translations sampled with different assignments are expected to have high diversity. We can think such model as a mixture-of-expert translation model where each expert is capable of producing translations with a certain style indicated by the code. In the inference time, code assignments are given to the model so that a selection of experts are picked to generate translations. The key question is how to extract such sentence codes. Here, we explore two approaches. First, a simple unsupervised method is tested, which clusters the sentence embeddings and use the cluster ids as the code assignments. 
Next, to capture only the structural variation of sentences, we turn to syntax. We encode the structure of constituent parse trees into discrete codes with a tree autoencoder. Experiments on two machine translation datasets show that a set of highly diverse translations can be obtained with reasonable mechanism for extracting the sentence codes, while the sampled candidates still have BLEU scores on par with the baselines. 1824 2 Proposed Approach 2.1 Extracting Sentence Codes Our approach produces diverse translations by conditioning sentence generation with the sentence codes. Ideally, we would like the codes to capture the information about the sentence structures rather than utterances. To extract such codes from target sentences, we explore two methods. Semantic Coding Model The first method extracts sentence codes from unsupervisedly learned semantic information. We cluster the sentence embeddings produced by pre-trained models into a fixed number of clusters, then use the cluster ids as discrete priors to condition the sentence generation. In this work, we test two semantic coding models. The first model is based on BERT (Devlin et al., 2018), where the vectors corresponding to the “[CLS]” token are clustered. The second model produces sentence embeddings by averaging FastText word embeddings (Bojanowski et al., 2017). Comparing to the hidden states of BERT, word embeddings are expected to contain less syntactic information as the word order is ignored during training. Syntactic Coding Model To explicitly capture the syntactic diversity, we also consider to derive the sentence codes from the parse trees produced by a constituency parser. As the utterance-level information is not desired, the terminal nodes are removed from the parse trees. To obtain the sentence codes, we use a TreeLSTM-based auto-encoder similar to Socher et al. (2011), which encodes the syntactic information into a single discrete code. As illustrated in Fig. 1 (a), a TreeLSTM cell (Tai et al., 2015) computes a recurrent state based on a given input vector and the states of Ni child nodes: hi = fcell(xi, hi1, hi2, ..., hiNi; θ). (1) The tree auto-encoder model is shown in Fig. 1 (c), where the encoder computes a latent tree representation. As the decoder has to unroll the vector representation following a reversed tree structure to predict the non-terminal labels, the standard TreeLSTM equation cannot be directly applied. To compute along the reversed tree, we modify Eq. 
1 for computing the hidden state of the j-th child node given the parent-node state hi:

    hij = fdec(hi; θj),    (2)
where the internal implementation of the recurrent function is the same as in Eq. 1; however, each node has a different parameterization depending on its position among its siblings. Note that on the decoder side, no input vectors are fed to the recurrent computation. Finally, the decoder states are used to predict the target labels, and the model is optimized with a cross-entropy loss.

Figure 1: Architecture of the TreeLSTM-based autoencoder with a discretization bottleneck for learning the sentence codes.

As the source sentence already provides hints on the target-side sentence structure, we feed the source information to the tree auto-encoder to encourage the latent representation to capture the syntax that cannot be inferred from the source sentence. To obtain the sentence codes from the latent tree representation, we apply improved semantic hashing (Kaiser and Bengio, 2018) to the hidden state of the root node, which discretizes the vector into an 8-bit code (binary vector). When performing improved semantic hashing, the forward pass computes two operations, binarization and saturated sigmoid, resulting in two vectors; one of these two vectors is randomly selected for the next computation. However, in the backward pass, the gradient always flows through the vector produced by the saturated sigmoid. As the model is trained together with the bottleneck, the codes are optimized directly to minimize the loss function.

2.2 Diverse Generation with Code Assignment

Once we obtain the sentence codes, we prefix the target-side sentences in the training data with the corresponding codes. The resulting target sentence has the form "⟨c12⟩⟨eoc⟩Here is a translation.", where the "⟨eoc⟩" token separates the code from the words. We train a regular NMT model on the modified training dataset. To generate diverse translations, we first obtain the top-K codes from the probability distribution of the code prediction, i.e., we select the K sentence codes with the highest probabilities. Then, conditioning on each code, we let beam search continue to generate the sentence, resulting in K translations conditioned on different codes.
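To make Section 2.2 concrete, here is a small sketch of the data preparation and decoding loop. It is our own illustration: the exact token format, the code_distribution and beam_search helpers, and their signatures are hypothetical, not part of any released toolkit.

```python
def prefix_with_code(target_sentence: str, code_id: int) -> str:
    """Turn 'Here is a translation .' into '<c12> <eoc> Here is a translation .'"""
    return f"<c{code_id}> <eoc> {target_sentence}"

def build_training_targets(target_sentences, sentence_codes):
    """sentence_codes[i] is the discrete code extracted for target_sentences[i]."""
    return [prefix_with_code(sent, code) for sent, code in zip(target_sentences, sentence_codes)]

def diverse_translate(model, source, k=3):
    """Sample K translations, one per high-probability code.

    `model` is assumed to expose two hypothetical methods:
      - code_distribution(source): dict mapping code id to probability
      - beam_search(source, forced_prefix): decoding with a forced target prefix
    """
    top_k_codes = sorted(model.code_distribution(source).items(),
                         key=lambda kv: kv[1], reverse=True)[:k]
    translations = []
    for code, _prob in top_k_codes:
        hyp = model.beam_search(source, forced_prefix=[f"<c{code}>", "<eoc>"])
        # Strip the code prefix before evaluation, as done for BLEU in Section 4.1.
        translations.append(" ".join(tok for tok in hyp if tok not in (f"<c{code}>", "<eoc>")))
    return translations
```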
3 Related Work Existing works for diverse text generation can be categorized into two major categories. The approaches in the first categoriy sample diverse sequences by varying a hidden representation. Jain et al. (2017) generates diverse questions by injecting Gaussian noise to the latent in a VAE for encouraging the creativity of results. Xu et al. (2018) learns K shared decoders, conditioned on different pattern rewriting embeddings. The former method is evaluated by assessing the ability of generating unique and unseen results, whereas the latter is evaluated with the number of unique uni/bi-grams and the divergence of word distributions produced by different decoders. Independent to this work, Shen et al. (2019) also explores mixture-of-expert models with an ensemble of learners. The paper discusses multiple training strategies and found the multiple choice learning works best. The second category of approaches attempts to improve the diversity by improving decoding algorithms. Li et al. (2016) modifies the scoring function in beam search to encourage the algorithm to promote hypotheses containing words from different ancestral hypotheses, which is also evaluated with the number of unique uni/bi-grams. Kulikov et al. (2018) uses an iterative beam search approach to generate diverse dialogs. Comparing to these works, we focus on generating translations with different sentence structures. We still use beam search to search for best words in every decoding steps under the constraint of code assignment. Our approach also comes with the advantage that no modification to the NMT model architecture is required. 4 Experiments 4.1 Experimental Settings We evaluate our models on two machine translation datasets: ASPEC Japanese-to-English dataset (Nakazawa et al., 2016) and WMT14 Germanto-English dataset. The datasets contain 3M and 4.5M bilingual pairs respectively. For the ASPEC Ja-En dataset, we use the Moses toolkit (Koehn et al., 2007) to tokenize the English side and Kytea (Neubig et al., 2011) to tokenize the Japanese side. After tokenization, we apply byte-pair encoding (Sennrich et al., 2016) to segment the texts into subwords, forcing the vocabulary size of each language to be 40k. For WMT14 De-En dataset, we use sentencepiece (Kudo and Richardson, 2018) to segment the words to ensure a vocabulary size of 32k. In evaluation, we report tokenized BLEU for ASPEC Ja-En dataset. For WMT14 De-En dataset, BLEU scores are generated using SacreBleu toolkit (Post, 2018). For models that produce sentence codes during decoding, the codes are removed from translation results before evaluating BLEU scores. 4.2 Obtaining Sentence Codes For the semantic coding model based on BERT, we cluster the hidden state of “[CLS]” token into 256 clusters with k-means algorithm. The cluster ids are then used as sentence codes. For models using FastText Embeddings, pre-trained vectors (Common Crawl, 2M words) are used. Please note that the number of clusters is a hyperparameter, here we choose the number of clusters to match the number of unique codes in the syntax-based model. To train the syntax coding model, we parse target-side sentences with Stanford CFG parser (Klein and Manning, 2003). The TreeLSTMbased auto-encoder is implemented with DGL,1 which is trained using AdaGrad optimizer for faster convergence. We found it helpful to pretrain the model without the discretization bottleneck for achieving higher label accuracy. 
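To illustrate the semantic coding variant of Sections 2.1 and 4.2, the following sketch is our own: it assumes the per-sentence embeddings (e.g. BERT "[CLS]" vectors or averaged FastText vectors) have already been computed, and simply turns cluster ids into sentence codes. The toy data and the 16-cluster setting are placeholders for the 256 clusters used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def semantic_codes(sentence_embeddings: np.ndarray, n_clusters: int = 256, seed: int = 0):
    """Cluster sentence embeddings and use each sentence's cluster id as its code."""
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
    return km.fit_predict(sentence_embeddings)   # array of code ids, one per sentence

# Toy usage with random vectors standing in for real sentence embeddings.
fake_embeddings = np.random.RandomState(0).randn(1000, 768).astype(np.float32)
codes = semantic_codes(fake_embeddings, n_clusters=16)
print(codes[:10])
```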
1 https://www.dgl.ai/

Model                              BLEU   Oracle   DP
ASPEC Ja→En
  Transformer Baseline             27.1     –      22.4
  + Diverse Dec (Li et al., 2016)  26.9     –      26.2
  + Random Codes                   27.0     –       4.9
  + Semantic Coding (BERT)         26.8    28.8    30.6
  + Semantic Coding (FastText)     27.3    28.5    31.1
  + Syntactic Coding               27.4    29.5    39.8
WMT14 De→En
  Transformer Baseline             29.4     –      28.2
  + Diverse Dec (Li et al., 2016)  29.1     –      31.0
  + Random Codes                   29.5     –       3.8
  + Semantic Coding (BERT)         29.3    29.4    21.7
  + Semantic Coding (FastText)     28.5    29.2    28.8
  + Syntactic Coding               29.3    30.7    33.0

Table 1: Results for different approaches. BLEU (%) is reported for the first sampled candidate. Oracle BLEU scores are produced with reference codes. Diversity scores (DP) are evaluated with Eq. 3.

4.3 Quantitative Evaluation of Diversity

As we are interested in the diversity among sampled candidates, the diversity metric based on the divergence between word distributions (Xu et al., 2018) cannot be applied in this case. In order to quantitatively evaluate the diversity of generated translations, we propose to use a BLEU-based discrepancy metric. Suppose Y is a list of candidate translations; we compute the diversity score as

    DP(Y) = 1 / (|Y|(|Y|−1)) · Σ_{y∈Y} Σ_{y′∈Y, y′≠y} (1 − ∆(y, y′)),    (3)

where ∆(y, y′) returns the BLEU score of two candidates. The equation gives a higher diversity score when each candidate contains more unique n-grams.

4.4 Experiment Results

We use the Base Transformer architecture (Vaswani et al., 2017) for all models. The results are summarized in Table 1. We sample three candidates with different models, and report the averaged diversity score. BLEU (%) is reported for the candidate with the highest confidence (log-probability). A detailed table with BLEU scores of all three candidates can be found in the supplementary material, where the BLEU scores of the second and third candidates are on par with the baseline.

Source (Japanese): Tg 以上の温度でI を消去できた。
A  1. It is possible to eliminate I at temperatures above Tg .
   2. It is possible to eliminate I at temperatures higher than Tg .
   3. It is possible to eliminate I at the temperature above Tg .
B  1. above Tg , I was able to be eliminated .
   2. It was found that the photoresists were eliminated at temperatures above Tg .
   3. at the temperature above Tg , I was able to be eliminated .
C  1. I could be eliminated at temperatures above Tg .
   2. I was removed at temperatures above Tg .
   3. It was possible to eliminate I at temperatures above Tg .

Table 2: A comparison of candidates produced by beam search (A), the semantic coding model based on BERT (B) and the syntactic coding model (C) on the Ja-En task.

We compare the proposed approach to three baselines. The first baseline samples three candidates using standard beam search. We also tested the diverse decoding approach (Li et al., 2016); the coefficient γ is chosen to maximize the diversity with no more than 0.5 BLEU degradation. The third baseline uses random codes for conditioning. As shown in the table, the model based on BERT sentence embeddings achieves higher diversity on the ASPEC dataset, which contains only formal texts. However, it fails to deliver similar results on the WMT14 dataset, which is more informal. This may be due to the difficulty of clustering BERT vectors, which were never trained to work with clustering. The model using FastText embeddings is shown to be more robust across the datasets, although it also fails to outperform the diverse decoding baseline on the WMT14 dataset. In contrast, syntax-based models achieve much higher diversity in both datasets.
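As a brief aside, the discrepancy score of Eq. 3 is straightforward to compute. The sketch below is our own, using NLTK's sentence-level BLEU (on the [0, 1] scale) as the ∆ function; the paper's DP values are on a 0–100 scale and may rely on a different BLEU implementation.

```python
from itertools import permutations
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def pairwise_diversity(candidates):
    """BLEU-based discrepancy DP(Y) of Eq. 3: average (1 - BLEU) over all
    ordered pairs of candidates. `candidates` is a list of tokenized hypotheses."""
    smooth = SmoothingFunction().method1
    total, n_pairs = 0.0, 0
    for y, y_prime in permutations(candidates, 2):
        # Delta(y, y') = BLEU of y' scored against y, here in the [0, 1] range.
        delta = sentence_bleu([y], y_prime, smoothing_function=smooth)
        total += 1.0 - delta
        n_pairs += 1
    return total / n_pairs if n_pairs else 0.0

# Example: three sampled translations of the same source sentence.
cands = [
    "it is possible to eliminate I at temperatures above Tg .".split(),
    "I could be eliminated at temperatures above Tg .".split(),
    "it was possible to eliminate I at temperatures above Tg .".split(),
]
print(round(pairwise_diversity(cands), 3))
```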
We found the results generated by this model has more diverse structures rather than word choices. By comparing the BLEU scores, no significant degradation is observed in translation quality. As a control experiment, using random codes does not contributes to the diversity. As a confirmation that the sentence codes have strong impact on sentence generation, the models using codes derived from references (oracle codes) achieve much higher BLEU scores. 1827 5 Analysis and Conclusion Table 2 gives samples of the candidate translations produced by the models conditioning on different discrete codes, compared to the candidates produced by beam search. We can see that the candidate translations produced by beam search has only minor grammatical differences. In contrast, the translation results sampled with the syntactic coding model have drastically different grammars. By examining the results, we found the syntaxbased model tends to produce one translation in active voice and another in passive voice. To summarize, we show a diverse set of translations can be obtained with sentence codes when a reasonable external mechanism is used to produce the codes. When a good syntax parser exists, the syntax-based approach works better in terms of diversity. The source code for extracting discrete codes from parse trees will be publicly available. Acknowledgement The research results have been achieved by ”Research and Development of Deep Learning Technology for Advanced Multilingual Speech Translation”, the Commissioned Research of National Institute of Information and Communications Technology (NICT), JAPAN. This work was partially supported by JSPS KAKENHI Grant Number JP16H05872, Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Electronics (Improving Deep Learning using Latent Structure). References Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Unnat Jain, Ziyu Zhang, and Alexander G. Schwing. 2017. Creativity: Generating diverse questions using variational autoencoders. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5415–5424. Lukasz Kaiser and Samy Bengio. 2018. Discrete autoencoders for sequence models. CoRR, abs/1801.09797. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In ACL. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL. Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In EMNLP. Ilya Kulikov, Alexander H. Miller, Kyunghyun Cho, and Jason Weston. 2018. Importance of a search strategy in neural dialogue modelling. CoRR, abs/1811.00907. Jiwei Li, Will Monroe, and Daniel Jurafsky. 2016. A simple, fast diverse decoding algorithm for neural generation. CoRR, abs/1611.08562. Toshiaki Nakazawa, Manabu Yaguchi, Kiyotaka Uchimoto, Masao Utiyama, Eiichiro Sumita, Sadao Kurohashi, and Hitoshi Isahara. 2016. 
Aspec: Asian scientific paper excerpt corpus. In LREC. Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise prediction for robust, adaptable japanese morphological analysis. In ACL, pages 529–533. Matt Post. 2018. A call for clarity in reporting bleu scores. In WMT. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. CoRR, abs/1508.07909. Tianxiao Shen, Myle Ott, Michael Auli, and Marc’Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. Richard Socher, Eric H Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Y Ng. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in neural information processing systems, pages 801–809. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In ACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Qiongkai Xu, Juyan Zhang, Lizhen Qu, Lexing Xie, and Richard Nock. 2018. D-page: Diverse paraphrase generation. CoRR, abs/1808.04364.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1828–1834 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1828 Self-Supervised Neural Machine Translation Dana Ruiter Cristina Espa˜na-Bonet Josef van Genabith Saarland University Saarland University Saarland University DFKI GmbH DFKI GmbH [email protected] {cristinae,Josef.Van Genabith}@dfki.de Abstract We present a simple new method where an emergent NMT system is used for simultaneously selecting training data and learning internal NMT representations. This is done in a self-supervised way without parallel data, in such a way that both tasks enhance each other during training. The method is language independent, introduces no additional hyper-parameters, and achieves BLEU scores of 29.21 (en2fr) and 27.36 (fr2en) on newstest2014 using English and French Wikipedia data for training. 1 Introduction Neural machine translation (NMT) has brought major improvements in translation quality (Cho et al., 2014; Bahdanau et al., 2014; Vaswani et al., 2017). Until recently, these relied on the availability of high-quality parallel corpora. As such corpora exist only for a few high-resource language combinations, overcoming this constraint by either extracting parallel data from non-parallel sources or developing unsupervised techniques in NMT is crucial to cover all languages. Obtaining comparable corpora is becoming easier (Paramita et al., 2019) and extracting parallel sentences from them a wide research field. Most of the methods estimate similarities between fragments to select pairs. Here we focus on similarities estimated from NMT representations. The strength of NMT embeddings as semantic representations was first shown qualitatively in Sutskever et al. (2014); Ha et al. (2016) and Johnson et al. (2017), and used for estimating semantic similarities at sentence level in Espa˜na-Bonet and Barr´on-Cede˜no (2017) for example. In a systematic study, Espa˜na-Bonet et al. (2017) show that cosine similarities between context vectors discriminate between parallel and non-parallel sentences already in the first stages of training. Other approaches perform max-pooling over encoder outputs (Schwenk, 2018; Artetxe and Schwenk, 2018) or calculate the mean of word embeddings (Bouamor and Sajjad, 2018) to extract pairs. On the other hand, unsupervised NMT is now achieving impressive results using large amounts of monolingual data and small parallel lexicons (Lample et al., 2018a; Artetxe et al., 2018b; Yang et al., 2018). These systems rely on very strong language models and back-translation, and build complex architectures that combine denoising autoencoders, back-translation steps and shared encoders among languages. The most successful architectures also use SMT phrase tables, standalone or in combination with NMT (Lample et al., 2018b; Artetxe et al., 2018a). In our approach, we propose a new and simpler method without a priori parallel corpora. Our premise is that NMT systems —either sequence to sequence models with RNNs, transformers, or any architecture based on encoder–decoder models— already learn strong enough representations of words and sentences to judge on-line if an input sentence pair is useful or not. Our approach resembles self-supervised learning (Raina et al., 2007; Bengio et al., 2013), i.e. 
learning a primary task where labelled data is not directly available, but where the data itself provides a supervision signal for another auxiliary task which lets the network learn the primary one. In our case this comes with a twist: we find cross-lingually close sentences as an auxiliary task for learning MT, and we learn MT as an auxiliary task for finding cross-lingually close sentences, in a mutually self-supervised loop: in effect a doubly virtuous circle. Our approach is also related to unsupervised NMT but differs in important aspects: since in our case there is no back-translation involved, the original corpus must contain similar sentences; therefore the use of comparable corpora is recommended to speed up the training. In the following, we describe the approach (Section 2) and the experiments in which it is going to be tested (Section 3). Section 4 reviews the results and, finally, we summarise and sketch future work in Section 5.

2 Joint Model Architecture

Without loss of generality, we consider a bidirectional NMT system {L1, L2}→{L1, L2} where the encoder and decoder have the information of both languages L1 and L2. The bidirectionality is simply achieved by tagging the source sentence with the target language, as done by Johnson et al. (2017) in their multilingual systems, and inputting sentence pairs in both directions. Two dimensions determine our architectures: (i) the specific representation of an input sentence, and (ii) the similarity or score function for an input sentence pair. We focus on two different embedding spaces in the encoder to build semantic sentence representations: the sum of word embeddings (Ce) and the hidden states of an RNN or the encoder outputs of a transformer (Ch). We define:

    Ce = Σ_{t=1}^{T} e_t,    Ch = Σ_{t=1}^{T} h_t,    (1)

where e_t is the word embedding at time step t and h_t its hidden state (RNN) or encoder output (transformer). In case h_t is an RNN hidden state, it is further defined by the concatenation of its forward and backward components, h_t^RNN = [→h_t; ←h_t]. These representations are used to score input sentence pairs. We study two functions for sentence selection with the aim of exploring whether a threshold-free selection method is viable. Let S_L1 and S_L2 be the vector representations for each sentence of a pair (either Ce or Ch). The cosine similarity of a sentence pair is calculated as the dot product of their representations:

    sim(S_L1, S_L2) = (S_L1 · S_L2) / (∥S_L1∥ ∥S_L2∥),    (2)

which is bounded in the [-1, 1] range. However, the threshold to decide when to accept a pair is not straightforward and might depend on the language pair and the corpus (España-Bonet et al., 2017; Artetxe and Schwenk, 2018). Besides, even if the measure does not depend on the length of the sentences, it might be scaled differently for different sentences. To solve this, Artetxe and Schwenk (2018) proposed a margin-based function:

    margin(S_L1, S_L2) = sim(S_L1, S_L2) / ( avr_kNN(S_L1, P_k)/2 + avr_kNN(S_L2, Q_k)/2 ),    (3)

where avr_kNN(X, Y_k) corresponds to the average similarity between a sentence X and kNN(X), its k nearest neighbors Y_k in the other language:

    avr_kNN(X, Y_k) = Σ_{Y ∈ kNN(X)} sim(X, Y) / k.    (4)

This scoring method penalises sentences which have a generally high cosine similarity with several candidates. Following Artetxe and Schwenk (2018), we use k = 4 in our experiments. In the selection process that follows, we consider four strategies. In all of them, sim(S_L1, S_L2) and margin(S_L1, S_L2) can be used for scoring. (i) Threshold dependent.
We find the highest scoring target sentence for each source sentence (pair i) as well as the highest scoring source for each target sentence (pair j) for either representation S=Ch or S=Ce (systems H and E respectively in the experiments). Since often i ̸= j, the process is not symmetric and only pairs that have been matched during selection in both language directions are accepted to the candidate list. A threshold is empirically determined to filter out false positives. (ii) High precision, medium recall. (system P) We apply the same methodology as before, but we use both representations S=Ch and S=Ce. Only pairs that have been matched during selection in both language directions and both representation types are accepted to the candidate list. Ch and Ce turn out to be complementary and this further restriction allows us to get rid of the threshold, and the sentence selection becomes parameter-free. (iii) Medium precision, high recall. (system R) The combination of representations is a key point for a threshold-free method, but the final selection becomes very restrictive. In order to increase recall, we are more permissive with the way we select pairs and instead of taking only the highest scoring target sentence for each source sentence we take the top-n (n=2 in our experiments). We still use both representations and extend the 1830 number of candidates considered only for S=Ch, which is the most restrictive factor at the beginning of training. (iv) Low precision, high recall. Generalisation of the previous strategy where we make the method symmetric in source–target and Ch–Ce. 3 Experimental Setting Data. We use Wikipedia (WP) dumps1 in English (en) and French (fr), and pre-process the articles and split the text into sentences using the Wikitailor toolkit2 (Barr´on-Cede˜no et al., 2015). We further tokenise and truecase them using standard Moses scripts (Koehn et al., 2007) and apply a byte-pair encoding (Sennrich et al., 2016) of 100 k merge operations trained on the concatenation of English and French data. We also remove duplicates and discard sentences with more than 50 tokens for training the MT systems. We fix these settings as a comparison point for all the experiments even though smaller vocabularies and longer sentences might imply the extraction of more parallel sentences (see Section 4). We use newstest2012 for validation and newstest2014 for testing. WP dumps are used for two different purposes in our systems: (i) to calculate initial word embeddings and (ii) as training corpus. In the first case, we use the complete editions (92 M sentences / 2.247 M tokens in en and 27 M / 652 M in fr). In the second case, we select only the subset of articles that can be linked among languages using Wikipedia’s langlinks with Wikitailor, i.e., we only take an article if there is the equivalent article in the other language. For this, the total amount of sentences (tokens) is 12 M (318 M) for en and 8 M (207 M) for fr. Model Specifications. We implemented3 the architecture described in Section 2 within the OpenNMT toolkit (Klein et al., 2017) both for RNN and Transformer encoders, and trained: LSTMsimP: 1-layer bidirectional encoder with LSTM units, additive attention, 512-dim word embeddings and hidden states, and an initial learning rate (λ) of 0.5 with SGD. Ce and Ch are both used 1We use WP editions downloaded in Jan. 
2015 from https://dumps.wikimedia.org/ 2https://github.com/cristinae/ WikiTailor 3https://github.com/ruitedk6/ comparableNMT as representations in the high precision mode and sim(SL1, SL2) as scoring function. LSTMmargP: The same as LSTMsimP but margin(SL1, SL2) as scoring function. LSTMmargR: The same as LSTMmargP but Ce and Ch are used in the high recall mode. LSTMmargH: As LSTMmargP with Ch as only representation. A hard threshold of 1.0 is used. LSTMmargE: As LSTMmargP with Ce as only representation. A hard threshold of 1.2 is used. Transformer: Transformer base as defined in Vaswani et al. (2017) with 6-layer encoder– decoder with 8-head self-attention, 512-dim word embeddings and a 2048-dim hidden feed-forward. Adam optimisation with λ=2 and beta2=0.998; noam λ decay with 8000 warm-up steps. Labels are smoothed (ϵ=0.1) and a dropout mask (p=0.1) is applied. The five models described in the LSTM category have transformer counterparts which follow the same transformer base architecture. All systems are trained on a single GPU GTX TITAN using a batch size of 64 (LSTM) or 50 (transformer) sentences. 4 Results and Discussion In order to train the 10 NMT systems, we initialise the word embeddings following Artetxe et al. (2017) using a seed dictionary of 2.591 numerals automatically extracted from our Wikipedia editions, and feed the system directly with comparable articles. This avoids the n × m explosion of possible combinations of sentences, where n is the number of sentences in L1 and m in L2. In our approach, we input P article ni × mj sentence pairs, that is, only all possible source–target sentence combinations within two articles linked by Wikipedia’s langlinks. Hence we miss the parallel sentences in non-linked articles but we win in speed. Articles are input in lots4. For them, the appropriate representation and scoring function are applied. Sentence pairs accepted by the selection method within a lot are extracted. Whenever enough parallel sentences are available to create a training batch, a training step is performed. Embeddings are modified by back-propagation and 4Since margin(SL1, SL2) takes into account the knearest neighbors of each sentence, small input lots lead to scarce information when selecting pairs. Considering lots with more than 15 sentences avoids the problem. 1831 0 1 2 3 4 5 6 7 Epochs 0.5 1.0 1.5 2.0 Number of Pai s (Millions) Δ 0Δ10 Δ 0Δ27 Δ 0Δ07 Δ 0Δ34 Δ 0Δ16 Δ 0Δ34 Δ 0Δ29 Δ 0Δ33 Δ 0Δ32 Δ 0Δ34 Δ 0Δ32 Δ 0Δ34 T ansfo me LSTM Figure 1: Number of unique accepted sentence pairs over the first 6 epochs for both margP systems. Points are labeled with the difference between the average margin scores of accepted and rejected pairs. the next lot of articles is processed with the improved representations. Notice that the extracted pairs may therefore differ through iterations, since it is the sentence representation at the specific training step that is responsible for the selection. Figure 1 shows the number of unique pairs selected during the first six epochs of training for both LSTMmargP and TransformermargP. The number of accepted sentences increases throughout the epochs, and so does the number of unique sentences used in training. Especially the first iteration over the data set is vital for improving and adapting the representations to the data itself. This quadruples the number of unique sentences accepted in the second pass over the data. 
While sentences are still able to pass from rejected to accepted as training advances, the two distributions are pushed apart and the gap in average margin scores between the two distributions (∆) increases as the representations get better at discriminating. We observe curriculum learning in the process: at the beginning (epoch 1) simple sentences with anchors (mostly homographs such as numbers, named entities, acronyms...) are selected but as training progresses, complex semantically equivalent sentences are extracted too. Curriculum learning is important since once the capacity of a neural architecture is exhausted, more data does not improve the performance. This self-supervised architecture not only selects the data but it does it in the most useful way for the learning. It remains to be checked whether smaller vocabularies and therefore a larger number of common BPE sub-units modifies the distribution of selected sen2 4 6 8 10 Epochs 0 5 10 15 20 25 30 BLEU en2fr fr2en Figure 2: BLEU scores of TransformermargP on newstest2014 as training progresses. tences especially at the beginning of training. These trends are common to all our models with small nuances due to the concrete architectures. Transformers generally accumulate more unique pairs before convergence than their LSTM counterparts for example, but other than this the behaviour is the same. To validate our method, we carry out a control experiment on parallel data (Europarl) where we scramble the target sentences, creating pseudo-comparable data with a ratio of 1:5 between parallel and unrelated sentences. On this data, we can measure precision and recall and we observe how our approach progresses towards high values for these scores in both margP and margR systems. These experiments also validate the nomenclature used in Section 2: TransformermargR reaches higher levels of recall than TransformermargP (98.4% vs. 95.3%) at the cost of a lower precision (73.9% vs. 94.7%). The major increment in data through training leads to a higher translation quality as measured by BLEU, so extraction and training in a loop enhance each other’s performance. Figure 2 shows the progressive improvement in translation performance throughout the training process of system TransformermargP and, again, the trend is general. Table 1 summarises the final performance of our 10 systems according to BLEU. The first thing to point out is that the difference between sim(SL1, SL2) and margin(SL1, SL2) is clear and margin outperforms sim by more than 13 and 4 BLEU points for the LSTM and Transformer models respectively. The differences among the representations used with the same scoring function are not so big but still relevant. Single representation 1832 Corpus, BLEU Reference en+fr sent. en2fr fr2en (in millions) Unsupervised NMT Artetxe et al. (2018b) NCr13, 99+32 15.13 15.56 Lample et al. (2018a) WMT, 16+16 15.05 14.31 Yang et al. (2018) WMT, 16+16 16.97 15.58 Self-supervised NMT LSTMsimP WP, 12+8 10.48 10.97 LSTMmargE WP, 12+8 13.71 14.26 LSTMmargH WP, 12+8 21.50 20.84 LSTMmargP WP, 12+8 23.64 22.95 LSTMmargR WP, 12+8 20.05 19.45 TransformersimP WP, 12+8 25.21 24.96 TransformermargE WP, 12+8 27.33 25.87 TransformermargH WP, 12+8 24.45 23.83 TransformermargP WP, 12+8 29.21 27.36 TransformermargR WP, 12+8 28.01 26.78 Unsupervised NMT+SMT Artetxe et al. (2018a) NCr13, 99+32 26.22 25.87 Lample et al. (2018b) NCr17,358+69 28.10 27.20 Table 1: BLEU scores achieved on newstest2014 with multi-bleu.perl. 
Training corpora differ by various authors: News Crawl 2007–2013 (NCr13), 2007– 2017 (NCr17), the full WMT data and Wikipedia (WP). models margE and margH (only word embeddings or encoder outputs) are 2–10 BLEU points below systems that combine both representations. It should be noted that such single representation systems can perform comparatively well (see TransformermargH) if the threshold is optimally set. However, this is not guaranteed even with a preceding exploration of the threshold parameter. In margP and margR, the combinations of representations do not need such hyper-parameters and achieve the best translation quality. The best system, TransformermargP, focuses on extracting parallel sentences with high precision and obtains BLEU scores of 29.21 (en2fr) and 27.36 (fr2en) with a total of 2.4 M selected unique sentence pairs. When increasing recall, too few new parallel sentences are gained as compared to the new false positives to improve the final translation, and TransformermargR and LSTMmargR are ∼1–3 BLEU points below their medium recall counterparts. Notice that we do not include the Low precision, high recall strategy since the effect is even more pronounced. Table 1 also presents a comparison with related work on unsupervised NMT. The comparison is delicate because training corpora and methodology differ. If we compare the final performance, we observe that we achieve similar results with less data (us vs. Lample et al. (2018b)); and when the same order of magnitude of sentences is used we obtain significantly better results (us vs. Lample et al. (2018a) and Yang et al. (2018)). The crucial difference here is that in one case one needs monolingual data, whereas we are using comparable corpora. 5 Conclusions and Future Work We present a joint architecture to select data and train NMT systems simultaneously using the emerging NMT system itself to select the data. This is a form of self-supervision alternating between two tasks that support each other in an incremental fashion. We focus on data representation, an adequate function for the selection process, and studying how to avoid additional hyperparameters that depend on the input corpus. The key point of our approach is the combination of a margin-based score with the intersection of sentence representations for filtering the input corpus. As future work, we will apply our methodology to domain adaptation. In this setting, word embeddings and hidden layers are already initialised via standard NMT training on parallel data and training is continued with an in-domain monolingual or comparable corpus. Our architecture is also useful for data selection in data rich language pairs and we will perform experiments on cleaning noisy parallel corpora. In the same vain as unsupervised MT, we want to continue our research by using back translation for rejected pairs and dealing with phrases instead of full sentences. That will allow us to extract more parallel text from a corpus and facilitate using these approaches for low-resourced languages. Existing approaches make use of huge amounts of monolingual (∼100 M, references in Table 1) or comparable (∼10 M, this work) sentences and these numbers are still far from what one can gather in a truly low-resource scenario. Acknowledgments The project on which this paper is based was partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee) and by the Leibniz Gemeinschaft via the SAW-2016-ZPID-2 project (CLuBS). 
Responsibility for the content of this publication is with the authors. 1833 References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632–3642, Brussels, Belgium. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised neural machine translation. In Proceedings of the Sixth International Conference on Learning Representations, ICLR. Mikel Artetxe and Holger Schwenk. 2018. Marginbased parallel corpus mining with multilingual sentence embeddings. arXiv preprint arXiv:1811.01136. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA. Alberto Barr´on-Cede˜no, Cristina Espa˜na-Bonet, Josu Boldoba, and Llu´ıs M`arquez. 2015. A factory of comparable corpora from Wikipedia. In Proceedings of the Eighth Workshop on Building and Using Comparable Corpora, pages 3–13. Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell., 35(8):1798–1828. Houda Bouamor and Hassan Sajjad. 2018. H2@BUCC18: Parallel Sentence Extraction from Comparable Corpora Using Multilingual Sentence Embeddings. In 11th Workshop on Building and Using Comparable Corpora, page 43. Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations Using RNN Encoder– Decoder for Statistical Machine Translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 1724–1734, Doha, Qatar. Association for Computational Linguistics. Cristina Espa˜na-Bonet and Alberto Barr´on-Cede˜no. 2017. Lump at SemEval-2017 Task 1: Towards an Interlingua Semantic Similarity. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 144–149, Vancouver, Canada. Association for Computational Linguistics. Cristina Espa˜na-Bonet, Ad´am Csaba Varga, Alberto Barr´on-Cede˜no, and Josef van Genabith. 2017. An empirical analysis of NMT-derived interlingual embeddings and their use in parallel sentence identification. IEEE Journal of Selected Topics in Signal Processing, 11(8):1340–1350. Thanh-Le Ha, Jan Niehues, and Alexander H. Waibel. 2016. Toward Multilingual Neural Machine Translation with Universal Encoder and Decoder. In Proceedings of the International Workshop on Spoken Language Translation, Seattle, WA. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Vi´egas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. Transactions of the Association for Computational Linguist, 5:339–351. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. 
In Proceedings of ACL 2017, System Demonstrations, pages 67–72. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180. Association for Computational Linguistics. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In Proceedings of the Sixth International Conference on Learning Representations, ICLR. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049. Association for Computational Linguistics. Monica Lestari Paramita, Ahmet Aker, Paul Clough, Robert Gaizauskas, Nikos Glaros, Nikos Mastropavlos, Olga Yannoutsou, Radu Ion, Dan S¸tef˘anescu, Alexandru Ceaus¸u, Dan Tufis¸, and Judita Preiss. 2019. Using Comparable Corpora for UnderResourced Areas of Machine Translation, chapter Collecting Comparable Corpora. Springer, Cham. Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y. Ng. 2007. Self-taught learn1834 ing: Transfer learning from unlabeled data. In Proceedings of the 24th International Conference on Machine Learning, ICML’07, pages 759–766, New York, NY, USA. ACM. Holger Schwenk. 2018. Filtering and mining parallel data in a joint multilingual space. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 228–234. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, (ACL 2016), Volume 1: Long Papers, pages 1715–1725, Berlin, Germany. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Unsupervised neural machine translation with weight sharing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 46–55. Association for Computational Linguistics.
2019
178
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1835–1841 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1835 Exploring Phoneme-Level Speech Representations for End-to-End Speech Translation Elizabeth Salesky1, Matthias Sperber2, Alan W Black1 1Carnegie Mellon University, USA 2Karlsruhe Institute of Technology, Germany [email protected] Abstract Previous work on end-to-end translation from speech has primarily used frame-level features as speech representations, which creates longer, sparser sequences than text. We show that a na¨ıve method to create compressed phoneme-like speech representations is far more effective and efficient for translation than traditional frame-level speech features. Specifically, we generate phoneme labels for speech frames and average consecutive frames with the same label to create shorter, higher-level source sequences for translation. We see improvements of up to 5 BLEU on both our high and low resource language pairs, with a reduction in training time of 60%. Our improvements hold across multiple data sizes and two language pairs. 1 Introduction The way translation input is represented has been shown to impact performance as well as how much data the model requires to train (Sennrich et al., 2016; Salesky et al., 2018; Cherry et al., 2018). The current standard approach for textbased translation is to segment words into subword units as a preprocessing step (Sennrich et al., 2016). Clustering common character sequences increases frequency of units in data and improves generalization to new word forms and rare words. End-to-end speech-to-text models are showing competitive results (Weiss et al., 2017; Bansal et al., 2018a,b; B´erard et al., 2018; Anastasopoulos and Chiang, 2018), but so far have not compared different ways to represent speech input. Unlike text, where discrete trainable embeddings are typically used, speech models typically use continuous features extracted from sliding windows (frames), held fixed during training. Framelevel features yield significantly longer, more sparsely-represented sequences than their text equivalents, and so speech models stand to benefit from learning compressed input representations. Previous works have reduced sequence lengths to make training more tractable through fixed-length downsampling. However, phonemes are variable lengths. Other work has shown promising results using phonemic representations and unsupervised term discovery from variable length sequences in MT and other domains, but as discrete units (Wilkinson et al., 2016; Bansal et al., 2017; Adams et al., 2016; Kamper et al., 2016; Dalmia et al., 2018b; Chung and Glass, 2018). Inspired by these works, we explore higher-level continuous speech embeddings for end-to-end speech translation. Specifically, we use alignment methods to generate phoneme labels, and average consecutive frames with the same label to create phonemelike feature vectors from variable numbers of frames. We use the Fisher Spanish-English and low-resource Mboshi-French datasets. We compare performance on the full Fisher dataset to smaller subsets as in Bansal et al. (2018b). As it is not possible to train a high-performing recognizer on many lower-resource tasks, we use a highresource model applied cross-lingually to create phoneme labels for Mboshi. 
We show significant performance improvements and reductions in training time under all conditions, demonstrating phoneme-informed speech representations are an effective and efficient tool for speech translation. 2 Method While frame-level Mel-frequency cepstral coefficient (MFCC) and filterbank features are informative, they create long, repetitive sequences which take recurrent models many examples to learn to model. Higher-level representations like phonemes can create shorter, better-represented input sequences to improve training efficiency and 1836 Figure 1: Example comparing number of frame-level features (50) to phoneme alignments (8). We saw an average reduction in sequence length of ∼80%. model robustness. Here, we average frame-level features within phoneme-like units to create one representation from a variable number of frames, using a trained speech recognizer and alignment. We extract 40-dimensional Mel filterbank features with per-speaker mean and variance normalization using Kaldi (Povey et al., 2011). Using an HMM/DNN system trained on the full Fisher Spanish dataset using the Kaldi (Povey et al., 2011) recipe for Fisher Spanish, we compute phoneme alignments using the triphone model (tri3a). 50 phoneme labels are used, including variants of silence, noise, and laughter. Within each utterance, we average the feature vectors for consecutive frames with the same label. The above method requires a recognizer with reasonable performance to perform alignment, not possible in low-resource conditions. Therefore, for Mboshi, we use a method that does not require language-specific data to generate a phoneme-like sequence. Specifically, we apply a Connectionist Temporal Classification (CTC) model trained with 6000 hours of English data (notably, not a related language), as described in Dalmia et al. (2018b) with the features from Dalmia et al. (2018a). To train this model, three frame-level features are spliced together, so output labels apply to a span of three frames. Labels comprise a set of 40 phonemes and the CTC ‘blank’ where the model is uncertain. The CTC ‘blank’ transition label enables all frames to be aligned to a label. As above, we average the feature vectors for consecutive frames with the same label within an utterance. 3 Model Architecture As in Bansal et al. (2018a), we use a sequenceto-sequence architecture inspired by Weiss et al. but modified to train within available resources; specifically, all models may be trained in less than 5 days on one GPU. We build an encoderdecoder model with attention in xnmt (Neubig et al., 2018) with 512 hidden units throughout. We use a 3-layer BiLSTM encoder. We do not use the additional convolutional layers from Weiss et al. and Bansal et al. to reduce temporal resolution, but rather use network-in-network (NiN) projections from previous work in sequence-to-sequence ASR (Zhang et al., 2017; Sperber et al., 2018) to get the same total 4× downsampling in time. This gives the benefit of added depth with fewer parameters. We compare our performance to these two works in Section 5.1. We closely follow the LSTM/NiN encoder used in Sperber et al. (2018) for ASR and use the same training procedure, detailed in Appendix A. We use an MLP attention with 1 hidden layer with 128 units and 64-dimensional target embeddings, though we use only 1 decoder hidden layer as opposed to 3 or 4 in previous works. All models use the same target preprocessing as previous work on this dataset: lowercasing and removing punctuation aside from apostrophes. 
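As a minimal sketch of the frame-averaging step described in Section 2, the snippet below collapses consecutive frames that share a phoneme label into a single averaged feature vector. It assumes the per-frame labels have already been produced by an aligner (the Kaldi tri3a alignments for Spanish, or the English CTC model for Mboshi); the toy example mirrors the 50-frame to 8-segment reduction of Figure 1.

```python
import itertools
import numpy as np

def collapse_frames(features, labels):
    """Average consecutive frames with the same phoneme label.

    features: (T, d) array of filterbank/MFCC frames
    labels:   length-T sequence of per-frame phoneme labels; CTC 'blank'
              frames simply keep the blank label and are averaged like any run
    Returns a (T', d) array with one vector per phoneme-like segment, T' << T.
    """
    segments, start = [], 0
    for _, group in itertools.groupby(labels):
        length = sum(1 for _ in group)
        segments.append(features[start:start + length].mean(axis=0))
        start += length
    return np.stack(segments)

# toy example: 50 frames aligned to 8 phoneme-like segments, as in Figure 1
feats = np.random.randn(50, 40)
labs = [0]*6 + [17]*9 + [3]*4 + [25]*8 + [3]*5 + [41]*7 + [11]*6 + [49]*5
print(collapse_frames(feats, labs).shape)   # (8, 40)
```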
4 Datasets Spanish-English. We use the Fisher Spanish speech corpus (Graff et al.), which consists of 160 hours of telephone speech in multiple Spanish dialects split into 138K utterances, translated via crowdsourcing by Post et al. (2013). We use the standard dev and test sets, each with ∼4k utterances. We do not use dev2. Four reference translations are used to score dev and test. Mboshi-French. Mboshi is a Bantu language spoken in the Republic of Congo with ∼160k speakers. We use the Mboshi-French parallel corpus (Godard et al., 2017) for our low-resource setting, which contains <5 hours of speech split into training and development sets of 4616 and 500 utterances respectively. This corpus does not have a designated test set, so as in Bansal et al. (2018b) we removed 200 randomly sampled utterances from training for development data and use the designated development set as test. 5 Results 5.1 Baseline We first compare our model to previously reported end-to-end neural speech translation results on the Fisher Spanish-English task using frame-level features. Table 1 shows our results on the full training set with comparisons to Weiss et al. (2017) and Bansal et al. (2018a). Weiss et al.’s model is 1837 Weiss et al. Bansal et al. Ours dev test dev test dev test BLEU 46.5 47.3 29.5 29.4 32.4 33.7 Table 1: Single task end-to-end speech translation BLEU scores on full dataset. significantly deeper than ours, with 4 more encoder layers and 3 more decoder layers. After more than two weeks of expensive multi-GPU training, it reaches a 4-reference BLEU score of 47.3 on test. We, like Bansal et al. (2018a,b), made modifications to our architecture and training schemes to train on a single GPU in approximately five days. While Bansal et al. use words on the target side to reduce time to convergence at a slight performance cost, we are able to use characters as in Weiss et al. by having a still shallower architecture (2 fewer layers on both the encoder and decoder), which allows us to translate to characters with approximately the same training time per epoch they observe with words (∼2 hours). We converge to a four-reference test BLEU of 33.7, showing 3-4 BLEU improvements over Bansal et al. (2018a) on dev and test. This demonstrates that our model has reasonable performance, providing a strong baseline before turning to our targeted task comparing input representations. 5.2 Frames vs Phonemes On our target task, we compare different subsets of the data to see how our method compares under different data conditions, using the full 160 hours as well as 40 and 20 hour subsets. Table 2 shows our results using frame vs phoneme-level speech input. When we use our phoneme-like embeddings, we see relative performance improvements of 13% on all data sizes, or up to 5.2 BLEU on the full dataset. Further, in reducing source lengths by ∼80%, training time is improved. We saw an average reduction in training time of 61%, which for the full dataset means we were able to train our model in 39.5 hours rather than 118.2. Frames Phonemes BLEU Time Data dev test dev test ∆ ∆ Full 32.4 33.7 37.6 38.8 +5.2 –67% 40hr 19.5 17.4 21.0 19.8 +2.0 –52% 20hr 9.8 8.9 11.1 10.0 +1.2 –65% Table 2: Comparison of frame vs phoneme input on Spanish-English SLT, with average BLEU improvement and average reduction in training time. We compare our variable-length downsampling to fixed-stride downsampling by striding input frames. 
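For reference, the fixed-stride baseline in this comparison simply subsamples frames at a constant rate, in contrast to the label-driven averaging sketched above; the snippet below is illustrative only.

```python
import numpy as np

def stride_downsample(features, stride=2):
    """Fixed-stride baseline: keep every `stride`-th frame, regardless of content."""
    return features[::stride]

feats = np.random.randn(1000, 40)           # ~10 s utterance at 10 ms frame shift
print(len(stride_downsample(feats, 2)))     # 500 frames  (50% reduction)
print(len(stride_downsample(feats, 3)))     # 334 frames  (67% reduction)
# the phoneme-averaged representation reduced sequence length by ~80% on average
```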
With a fixed stride of 2, performance decreases on 40 hours by ∼2 BLEU from 19.5 to 17.0 on dev and 17.4 to 15.6 on test. With a fixed stride of 3, performance drops further to 13.7 and 11.8, respectively. By contrast, we saw improvements of +2 BLEU on 40 hours using our variablelength downsampling, though it lead to greater reductions in the number of input feature vectors. Clearly phoneme-informed reduction is far more effective than fixed schedule downsampling. 5.3 Analysis To better understand our improvements, we target three points. Does making source and target sequence lengths more well-matched improve performance? To test we compare target preprocessing granularities. Second, reducing source lengths will impact both the encoder and attention. To investigate, we look at both encoder downsampling and ASR, where unlike MT, sequences are monotonic. Finally, we look at our low-resource case, Mboshi-French, where we must get phoneme labels from a cross-lingual source. Previous work on sequence-to-sequence speech translation has used encoder downsampling of 4×, while 8× is more common among sequence-tosequence ASR systems (Zhang et al., 2017), motivated by reducing parameters and creating more one-to-one relationships between lengths of target sequence (typically characters) and the final encoder states to which the model attends. We use encoder downsampling of 4×, concatenating adjacent states after each layer. Table 3 shows target sequence lengths and results with different preprocessing. By averaging frames per local phoneme label in addition to encoder downsampling, source sequence lengths are further reduced on average by 79%, yielding final encoder state lengths of 22, closest in length to 1k BPE targets (14) rather than characters (50). Given that the 1k BPE model perTarget Target Frames Phonemes Preproc. Length dev test dev test chars 50.2 18.8 17.3 20.0 18.4 1k bpe 13.7 19.5 17.4 21.0 19.8 10k bpe 10.6 16.2 14.7 18.4 17.5 words 10.4 16.4 14.6 18.2 17.4 Table 3: Comparing effects of target preprocessing with different sources on BLEU, Spanish-English 40hr 1838 forms best, it does appear that more similar source and target lengths boost performance. For Spanish, we found that the mean number of frames per phone was 7.6, while the median was 6. Silence in this dataset skews these statistics higher; silence-marked frames account for 10.7% of phone occurrences. Reducing multiple frames per phone to a single feature vector allows faster parameter optimization, as shown by improvements in early epochs in Figure 2. Figure 2: Dev BLEU over training with frames vs phonemes. Single-reference BLEU on 1k lines of dev. We also compare the best phoneme models without encoder downsampling; with reduced sequence lengths, this becomes more tractable to train. On the full data, we see this improves our scores slightly, from 37.6 to 38.1 on dev and 38.8 to 39.2 on test. We see further improvements on 40 hours (22.4 dev & 20.3 test), and on 20 hours, similar dev performance but slight improvements on test (10.3 dev & 9.6 test). It is possible that with less data, the additional encoder parameters without downsampling do not receive enough updates to be well-trained. To test whether the approach is a generally more effective input representation or only an aid in the particularly complex learning task of speech translation where it helps to reduce the distance between inputs and outputs, we apply our method to ASR, where alignments are monotonic.
We see similar levels of improvement, suggesting this approach produces generally more effective input representations: ∼18% relative improvements on all three dataset sizes, or up to –9 absolute WER on 40 and 20 hours, as detailed in Table 4. We note that Weiss et al. (2017) reported 25.7, 23.2 on dev and test, respectively, with a considerably larger network, which we are now able to match on test. We note that this neural model also outFrames Phonemes WER Time Data dev test dev test ∆ ∆ Full 33.4 30.0 28.0 23.4 –6.0 –43% 40hr 44.8 46.7 36.6 36.6 –9.2 –40% 20hr 56.3 59.1 48.2 49.1 –9.1 –50% Table 4: Comparison of frame vs phoneme input on Spanish ASR, with average reduction in WER and average reduction in training time. performs the Kaldi models; the Kaldi model using the tri3a alignments we use for phoneme boundaries yields 45.7 dev WER, and using more sophisticated alignment models, achieves 29.8. On our low-resource Mboshi task, we do not have enough data to train a high-quality recognizer to produce phoneme alignments. Instead, we use a model from an unrelated language (English) applied cross-lingually. With small training and evaluation sets, scores are less stable and changes must be taken with a grain of salt. We see very low scores with frames, but still see improvements with phonemes, though the labels were produced by an English model. Bansal et al. (2018b) reported 3.5 BLEU using frames, which they improved to 7.1 by pretraining their encoder with 300 hours of English and decoder with 20 hours of French. Creating phoneme-level embeddings, we are able to get similar levels of improvement without training the network on more data, though we use an unadapted foreign language model. Frames Phonemes Data dev test dev test Mboshi (chars) 0.0 0.0 5.2 3.6 Mboshi (1k bpe) 2.3 1.4 7.0 5.6 Mboshi (words) 1.8 1.4 7.8 5.9 Table 5: Comparison of frame vs phoneme input on Mboshi-French SLT. Mboshi phoneme labels produced with English CTC phoneme recognizer. While LSTM-based sequence-to-sequence models are able to learn from long, redundant sequences, we show that they learn more efficiently and effectively across multiple data conditions when given sequences reduced using phoneme boundaries. This is evidenced by our improvements across all data sizes, and significant improvements in early epochs, shown in Figure 2. 1839 We compared two methods for alignment, an HMM-based model and a CTC-based model, the first applied monolingually and the second crosslingually. The CTC model yields blank alignments for some frames, reducing the range of frames to be averaged, though the center of mass often remains the same. We hypothesize that this does not greatly impact results, and previous work has explored using the middle HMM state for alignments rather than all (Stuker et al., 2003), but this would benefit from a more robust comparison. As well, a deeper comparison of monolingual versus crosslingual alignments applied to a greater number of test languages would be beneficial. 6 Conclusion Previous work on end-to-end speech translation has used frame-level speech features. We have shown that a na¨ıve method to create higher-level speech representations for translation can be more effective and efficient than traditional frame-level features. We compared two input representations for two unrelated languages pairs, and a variety of differently-resourced conditions, using both a supervised alignment method and a cross-lingual method for our low-resource case. 
Our method does not introduce additional parameters: we hope to motivate future work on learning speech representations, with continued performance on lowerresource settings if additional parameters are introduced. Acknowledgements We would like to thank the anonymous reviewers for their helpful comments. References Oliver Adams, Graham Neubig, Trevor Cohn, Steven Bird, Quoc Truong Do, and Satoshi Nakamura. 2016. Learning a lexicon and translation model from phoneme lattices. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2377–2382. Antonios Anastasopoulos and David Chiang. 2018. Tied multitask learning for neural speech translation. arXiv preprint arXiv:1802.06655. Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2018a. Lowresource speech-to-text translation. Proc. of Interspeech. ArXiv:1803.09164. Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2018b. Pre-training on high-resource speech recognition improves low-resource speech-to-text translation. arXiv preprint arXiv:1809.01431. Sameer Bansal, Herman Kamper, Adam Lopez, and Sharon Goldwater. 2017. Towards speech-totext translation without speech recognition. arXiv preprint arXiv:1702.03856. Alexandre B´erard, Laurent Besacier, Ali Can Kocabiyikoglu, and Olivier Pietquin. 2018. End-toend automatic speech translation of audiobooks. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6224–6228. IEEE. William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pages 4960–4964. IEEE. Colin Cherry, George Foster, Ankur Bapna, Orhan Firat, and Wolfgang Macherey. 2018. Revisiting character-based neural machine translation with capacity and compression. arXiv:1808.09943. Yu-An Chung and James Glass. 2018. Speech2vec: A sequence-to-sequence framework for learning word embeddings from speech. arXiv preprint arXiv:1803.08976. Siddharth Dalmia, Xinjian Li, Florian Metze, and Alan W. Black. 2018a. Domain robust feature extraction for rapid low resource asr development. 2018 IEEE Workshop on Spoken Language Technology (SLT). Siddharth Dalmia, Ramon Sanabria, Florian Metze, and Alan W. Black. 2018b. Sequence-based multilingual low resource speech recognition. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4909–4913. Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. Pierre Godard, Gilles Adda, Martine Adda-Decker, Juan Benjumea, Laurent Besacier, Jamison CooperLeavitt, Guy-No¨el Kouarata, Lori Lamel, H´el`ene Maynard, Markus M¨uller, et al. 2017. A very low resource language speech corpus for computational language documentation experiments. arXiv preprint arXiv:1710.03501. David Graff, Shudong Huang, Ingrid Cartagena, Kevin Walker, and Christopher Cieri. Fisher spanish speech (LDC2010S01). Https://catalog.ldc.upenn.edu/ ldc2010s01. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput. 1840 Herman Kamper, Aren Jansen, and Sharon Goldwater. 2016. Unsupervised word segmentation and lexicon discovery using acoustic word embeddings. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 24(4):669–679. Diederik P Kingma and Jimmy Ba. 2015. 
Adam: A method for stochastic optimization. Proc. of ICLR. ArXiv:1412.6980. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. Graham Neubig, Matthias Sperber, Xinyi Wang, Matthieu Felix, Austin Matthews, Sarguna Padmanabhan, Ye Qi, Devendra Singh Sachan, Philip Arthur, Pierre Godard, et al. 2018. Xnmt: The extensible neural machine translation toolkit. arXiv:1803.00188. Toan Q Nguyen and David Chiang. 2018. Improving lexical choice in neural machine translation. Proc. of NAACL HLT. ArXiv:1710.01329. Matt Post, Gaurav Kumar, Adam Lopez, Damianos Karakos, Chris Callison-Burch, and Sanjeev Khudanpur. 2013. Improved speech-to-text translation with the fisher and callhome spanish–english speech translation corpus. Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. 2011. The kaldi speech recognition toolkit. Elizabeth Salesky, Andrew Runge, Alex Coda, Jan Niehues, and Graham Neubig. 2018. Optimizing segmentation granularity for neural machine translation. arXiv:1810.08641. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL. Matthias Sperber, Jan Niehues, Graham Neubig, Sebastian St¨uker, and Alex Waibel. 2018. Selfattentional acoustic models. Proc. of EMNLP. ArXiv:1803.09519. Sebastian Stuker, Tanja Schultz, Florian Metze, and Alex Waibel. 2003. Multilingual articulatory features. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings.(ICASSP’03)., volume 1, pages I–I. IEEE. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. Ron J Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. 2017. Sequence-tosequence models can directly transcribe foreign speech. arXiv:1703.08581. Andrew Wilkinson, Tiancheng Zhao, and Alan W Black. 2016. Deriving phonetic transcriptions and discovering word segmentations for speech-tospeech translation in low-resource settings. In INTERSPEECH. Yu Zhang, William Chan, and Navdeep Jaitly. 2017. Very deep convolutional networks for end-to-end speech recognition. A Appendix. LSTM/NiN Encoder and Training Procedure Details A.1 Encoder Downsampling Procedure Weiss et al. (2017) and Bansal et al. (2018a) use two strided convolutional layers atop three bidirectional long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) layers to downsample input sequences in time by a total factor of 4. Weiss et al. (2017) additionally downsample feature dimensionality by a factor of 3 using a ConvLSTM layer between their convolutional and LSTM layers. This is in contrast to the pyramidal encoder (Chan et al., 2016) from sequence-to-sequence speech recognition, where pairs of consecutive layer outputs are concatenated before being fed to the next layer to halve the number of states between layers. To downsample in time we instead use the LSTM/NiN model used in Sperber et al. (2018) and Zhang et al. (2017), which stacks blocks consisting of an LSTM, a network-in-network (NiN) projection, layer batch normalization and then a ReLU non-linearity. NiN denotes a simple linear projection applied at every timestep, performing downsampling by a factor of 2 by concatenating pairs of adjacent projection inputs. 
The LSTM/NiN blocks are extended by a final LSTM layer for a total of three BiLSTM layers with the same total downsampling of 4 as Weiss et al. (2017) and Bansal et al. (2018a). These blocks give us the benefit of added depth with fewer parameters. A.2 Training Procedure We follow the training procedure from Sperber et al. (2018). The model uses variational recurrent dropout with probability 0.2 and target character dropout with probability 0.1 (Gal and Ghahramani, 2016). We apply label smoothing (Szegedy et al., 2016) and fix the target embedding norm to 1 (Nguyen and Chiang, 2018). For inference, we use a beam size of 15 and length normalization 1841 with exponent 1.5. We set the batch size dynamically depending on the input sequence length such that the average batch size was 36. We use Adam (Kingma and Ba, 2015) with initial learning rate of 0.0003, decayed by 0.5 when validation BLEU did not improve over 10 epochs initially and 5 epochs after the first decay. We do not use L2 weight decay or Gaussian noise, and use a single model replica. All models use the same preprocessing as previous work on this dataset: lowercasing and removing punctuation aside from apostrophes. We use input feeding (Luong et al., 2015), and we exclude utterances longer than 1500 frames to manage memory requirements.
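To make the encoder of Appendix A.1 concrete, the following PyTorch-style sketch shows one LSTM/NiN block and the resulting 4× time downsampling. The paper's models are built in xnmt, so the class names, padding choices and normalization placement here are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMNiNBlock(nn.Module):
    """BiLSTM -> NiN projection with 2x time downsampling -> batch norm -> ReLU."""

    def __init__(self, input_dim, hidden_dim=512):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim // 2,
                            batch_first=True, bidirectional=True)
        # NiN: a linear projection applied at every timestep; downsampling by 2
        # comes from concatenating pairs of adjacent timesteps before projecting
        self.nin = nn.Linear(2 * hidden_dim, hidden_dim, bias=False)
        self.bn = nn.BatchNorm1d(hidden_dim)

    def forward(self, x):                          # x: (batch, T, input_dim)
        h, _ = self.lstm(x)                        # (batch, T, hidden_dim)
        if h.size(1) % 2:                          # pad time to an even length
            h = F.pad(h, (0, 0, 0, 1))
        h = h.reshape(h.size(0), h.size(1) // 2, 2 * h.size(2))   # concat adjacent frames
        h = self.nin(h)                            # (batch, T/2, hidden_dim)
        h = self.bn(h.transpose(1, 2)).transpose(1, 2)
        return torch.relu(h)

class SpeechEncoder(nn.Module):
    """Two LSTM/NiN blocks (4x downsampling in time) plus a final BiLSTM,
    giving three BiLSTM layers in total, as described in Appendix A.1."""

    def __init__(self, feat_dim=40, hidden_dim=512):
        super().__init__()
        self.block1 = LSTMNiNBlock(feat_dim, hidden_dim)
        self.block2 = LSTMNiNBlock(hidden_dim, hidden_dim)
        self.final = nn.LSTM(hidden_dim, hidden_dim // 2,
                             batch_first=True, bidirectional=True)

    def forward(self, x):                          # x: (batch, T, feat_dim)
        h = self.block2(self.block1(x))
        out, _ = self.final(h)
        return out                                 # (batch, ~T/4, hidden_dim)
```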
2019
179
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 184–193 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 184 Bilingual Lexicon Induction with Semi-supervision in Non-Isometric Embedding Spaces Barun Patra⇤, Joel Ruben Antony Moniz⇤, Sarthak Garg⇤, Matthew R. Gormley, Graham Neubig Carnegie Mellon University {bpatra, jrmoniz, sarthakg, mgormley, gneubig}@cs.cmu.edu Abstract Recent work on bilingual lexicon induction (BLI) has frequently depended either on aligned bilingual lexicons or on distribution matching, often with an assumption about the isometry of the two spaces. We propose a technique to quantitatively estimate this assumption of the isometry between two embedding spaces and empirically show that this assumption weakens as the languages in question become increasingly etymologically distant. We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS) — a semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, as well as a novel hubness filtering technique. Our proposed method obtains state of the art results on 15 of 18 language pairs on the MUSE dataset, and does particularly well when the embedding spaces don’t appear to be isometric. In addition, we also show that adding supervision stabilizes the learning procedure, and is effective even with minimal supervision.⇤ 1 Introduction Bilingual lexicon induction (BLI), the task of finding corresponding words in two languages from comparable corpora (Haghighi et al., 2008; Xing et al., 2015; Zhang et al., 2017a; Artetxe et al., 2017; Lample et al., 2018), finds use in numerous NLP tasks like POS tagging (Zhang et al., 2016), parsing (Xiao and Guo, 2014), document classification (Klementiev et al., 2012), and machine translation (Irvine and Callison-Burch, 2013; Qi et al., 2018). Most work on BLI uses methods that learn a mapping between two word embedding spaces ⇤Equal Contribution ⇤Code to replicate the experiments presented in this work can be found at https://github.com/joelmoniz/ BLISS. (Ruder, 2017), which makes it possible to leverage pre-trained embeddings learned on large monolingual corpora. A commonly used method for BLI, which is also empirically effective, involves learning an orthogonal mapping between the two embedding spaces (Mikolov et al. (2013a), Xing et al. (2015), Artetxe et al. (2016), Smith et al. (2017)). However, learning an orthogonal mapping inherently assumes that the embedding spaces for the two languages are isometric (subsequently referred to as the orthogonality assumption). This is a particularly strong assumption that may not necessarily hold true, and consequently we can expect methods relying on this assumption to provide sub-optimal results. In this work, we examine this assumption, identify where it breaks down, and propose a method to alleviate this problem. We first present a theoretically motivated approach based on the Gromov-Hausdroff (GH) distance to check the extent to which the orthogonality assumption holds (§2). We show that the constraint indeed does not hold, particularly for etymologically and typologically distant language pairs. 
Motivated by the above observation, we then propose a framework for Bilingual Lexicon Induction with Semi-Supervision (BLISS) (§3.2). Besides addressing the limitations of the orthogonality assumption, BLISS also addresses the shortcomings of purely supervised and purely unsupervised methods for BLI (§3.1). Our framework jointly optimizes for supervised embedding alignment, unsupervised distribution matching, and a weak orthogonality constraint in the form of a back-translation loss. Our results show that the different losses work in tandem to learn a better mapping than any one can on its own (§4.2). In particular, we show that two instantiations of the semi-supervised framework, corresponding to different supervised loss objectives, outperform their supervised and unsupervised counterparts on numerous language pairs across two datasets. Our best model outperforms the state of the art on 10 of 16 language pairs on the MUSE datasets.

Our analysis (§4.4) demonstrates that adding supervision to the learning objective, even in the form of a small seed dictionary, substantially improves the stability of the learning procedure. In particular, for cases where either the embedding spaces are far apart according to GH distance or the quality of the original embeddings is poor, our framework converges where the unsupervised baselines fail to. We also show that for the same amount of available supervised data, leveraging unsupervised learning allows us to obtain superior performance over baseline supervised, semi-supervised and unsupervised methods.

2 Isometry of Embedding Spaces

Both supervised and unsupervised BLI often rely on the assumption that the word embedding spaces are isometric to each other; thus, they learn an orthogonal mapping matrix to map one space to another (Xing et al., 2015). This orthogonality assumption might not always hold, particularly when the language pairs in consideration are etymologically distant — Zhang et al. (2017b) and Søgaard et al. (2018) provide evidence of this by observing a higher Earth Mover's distance and eigenvector similarity metric, respectively, between etymologically distant languages. In this work, we propose a novel way of analyzing a priori the validity of the orthogonality assumption, using the Gromov-Hausdorff (GH) distance to check how well two language embedding spaces can be aligned under an isometric transformation.†

The Hausdorff distance between two metric spaces is a measure of the worst case, or diametric, distance between the spaces. Intuitively, it measures the distance between the nearest neighbours that are the farthest apart. Concretely, given two metric spaces X and Y with a distance function d(·, ·), the Hausdorff distance is defined as:

H(X, Y) = max{ sup_{x∈X} inf_{y∈Y} d(x, y), sup_{y∈Y} inf_{x∈X} d(x, y) }.   (1)

The Gromov-Hausdorff distance minimizes the Hausdorff distance over all isometric transforms between X and Y, thereby providing a quantitative estimate of the isometry of the two spaces:

H(X, Y) = inf_{f,g} H(f(X), g(Y)),   (2)

where f and g belong to the set of isometric transforms. Computing the Gromov-Hausdorff distance involves solving hard combinatorial problems, which is intractable in general. Following Chazal et al. (2009), we approximate it by computing the Bottleneck distance between the two metric spaces (the details of which can be found in Appendix §A.1).

†Note that since we mean center the embeddings, orthogonal transforms are equivalent to isometric transforms.
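As a concrete illustration, the sketch below computes the symmetric Hausdorff distance of Eq. (1) between two sets of word embeddings with Euclidean d(·, ·); it is a minimal sketch, not the paper's full estimator.

```python
import numpy as np

def hausdorff(X, Y):
    """Symmetric Hausdorff distance of Eq. (1) with Euclidean d(., .).

    X: (n, d), Y: (m, d) mean-centred, unit-normed embeddings of the most
    frequent words of the two languages.
    """
    # pairwise Euclidean distances via ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2.0 * X @ Y.T
    D = np.sqrt(np.maximum(sq, 0.0))              # (n, m)
    sup_inf_xy = D.min(axis=1).max()              # sup_x inf_y d(x, y)
    sup_inf_yx = D.min(axis=0).max()              # sup_y inf_x d(x, y)
    return max(sup_inf_xy, sup_inf_yx)
```

The GH distance of Eq. (2) additionally minimizes this quantity over isometric transforms; as noted above, that step is approximated via the Bottleneck distance following Chazal et al. (2009), which requires persistent-homology tooling and is not sketched here.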
As can be observed from Table 2, the GH distances are higher for etymologically distant language pairs. 3 Semi-supervised Framework In this section, we motivate and define our semisupervised framework for BLI. First we describe issues with purely supervised and unsupervised methods, and then lay the framework for tackling them along with orthogonality constraints. 3.1 Drawbacks of Purely Supervised and Unsupervised Methods Most purely supervised methods for BLI just use words in an aligned bilingual dictionary and do not utilize the rich information present in the topology of the embeddings’ space. Purely unsupervised methods, on the other hand, can suffer from poor performance if the distribution of the embedding spaces of the two languages are very different from each other. Moreover, unsupervised methods can successfully align clusters of words, but miss out on fine grained alignment within the clusters. We explicitly show the aforementioned problem of purely unsupervised methods with the help of the toy dataset shown in 1a, and 1b. In this dataset, due to the density difference between the two large blue clusters, unsupervised matching is consistently able to align them properly, but has trouble aligning the smaller embedded green and red sub-clusters. The correct transformation of the source space is a clockwise 90◦rotation followed by reflection along the x-axis. Unsupervised matching converges to this correct transformation only half of the time; in rest of the cases, it ignores the alignment of the sub-clusters and converges to a 90◦counter-clockwise transformation as shown in 1c. We also find evidence of this problem in the real datasets used in our experiments as shown in Ta186 (a) Source distribution (b) Target distribution (c) Misaligned source distribution Figure 1: A toy dataset demonstrating the shortcomings of unsupervised distribution matching. Fig. a) and b) show two different distributions (source and target respectively) over six classes. Classes 1 and 2; classes 3 and 4; classes 5 and 6 were respectively drawn from a uniform distribution over a sphere, rectangle and triangle respectively. Fig. c) shows the misprojected source distribution obtained from unsupervised distribution matching which fails to align with the target distribution of Fig. b). Source ! Target Incorrect Predicted aunt ! тетя бабушка (Grandmother) uruguay ! уругвая аргентины (Argentina) regiments ! полков кавалерийские (Cavalry) comedian ! комик актёр (Actor) Table 1: Words for which semi-supervised method predicts correctly, but unsupervised method doesn’t. The unsupervised method is able to guess the general family but fails to pinpoint exact match. ble 1. It can be seen that the unsupervised method aligns clusters of similar words, but is poor at fine grained alignment. We hypothesize that this problem can be resolved by giving it some supervision in the form of matching anchor points inside these sub-clusters, which correctly aligns them. Analogously, for the task of BLI, generating a small supervised seed lexicon for providing the requisite supervision, is generally feasible for most language pairs, through bilingual speakers, existing dictionary resources, or Wikipedia language links. 3.2 A Semi-supervised Framework In order to alleviate the problems with the orthogonality constraints, the purely unsupervised and supervised approaches, we propose a semisupervised framework, described below. Let X = {x1 . . . xn} and Y = {y1 . . . 
ym}, xi, yi 2 Rd be two sets of word embeddings from the source and target language respectively and let S = {(xs 1, ys 1) . . . (xs k, ys k)} denote the bilingual aligned word embeddings. For learning a linear mapping matrix W that maps X to Y we leverage unsupervised distribution matching, aligning known word pairs and a data-driven weak orthogonality constraint. Unsupervised Distribution Matching: Given all word embeddings X and Y, the unsupervised loss LW|D aims to match the distribution of both embedding spaces. In particular, for our formulation, we use an adversarial distribution matching objective, similar to the work of Lample et al. (2018). Specifically, a mapping matrix W from the source to the target is learned to fool a discriminator D, which is trained to distinguish between the mapped source embeddings WX = {Wx1 . . . Wxn} and Y. We parameterize our discriminator with an MLP, and alternatively optimize the mapping matrix and the discriminator with the corresponding objectives: LD|W = −1 n X xi2X log(1 −D(Wxi)) −1 m X xi2Y log D(xi) LW|D = −1 n X xi2X log D(Wxi) (3) Aligning Known Word Pairs: Given aligned bilingual word embeddings S, we aim to minimize a similarity function (fs) which maximizes the similarity between the corresponding matched pairs of words. Specifically, the loss is defined as: LW|S = −1 |S| X (xs i ,ys i )2S fs(Wxs i, ys i ) (4) 187 Weak Orthogonality Constraint: Given an embedding space X, we define a consistency loss that maximizes a similarity function fa between x and W T Wx, x 2 X. This cyclic consistency loss LW|O encourages orthogonality of the W matrix based on the joint optimization: LW|O = −1 |X| X xi2X fa(xi, W T Wxi) (5) The above loss term, used in conjunction with the supervised and unsupervised losses, allows the model to adjust the trade-off between orthogonality and accuracy based on the joint optimization. This is particularly helpful in the embedding spaces where the orthogonality constraint is violated (§4.4). Moreover, this data driven orthogonality constraint is more robust than an enforced hard constraint (A.3). The final loss function for the mapping matrix is: L = LW|D + LW|S + LW|O (6) LW|D enables the model to leverage the distributional information available from the two embedding spaces, thereby using all available monolingual data. On the other hand, LW|S allows for the correct alignment of labeled pairs when available in the form of a small seed dictionary. Finally, LW|O encourages orthogonality. One can think of LW|O and LW|S as working against each other when the spaces are not isometric. Jointly optimizing both helps the model to strike a balance between them in a data driven manner, encouraging orthogonality but still allowing for flexible mapping. 3.3 Nearest Neighbor Retrieval For NN lookup, we use the CSLS distance defined by Lample et al. (2018). Let ΓA(b) be the average cosine similarity between b and it’s k-NN in A. Then CSLS is defined as CSLS(x, y) = 2cos(Wx, y) −ΓY(Wx) −ΓWX (y).⇤. 3.4 Iterative Procrustes Refinement and Hubness Mitigation A common method of improving BLI is iteratively expanding the dictionary and refining the mapping matrix as a post-processing step (Artetxe et al., 2017; Lample et al., 2018). Given a learnt mapping matrix, Procrustes refinement first finds ⇤WX denotes the set {Wx : x 2 X} the pair of points in the two languages that are very closely matched by the mapping matrix and constructs a bilingual dictionary from these pairs. 
These pair of points are found by considering the nearest neighbors (NN) of the projected source words in the target space. The mapping matrix is then refined by setting it to be the Procrustes solution of the dictionary obtained. Iterative Procrustes Refinement (also referred as Iterative Dictionary Expansion) applies the above step iteratively. However, learning an orthogonal linear map in such a way leads to some words (known as hubs) to become nearest neighbors of a majority of other words (Radovanovi´c et al., 2010; Dinu and Baroni, 2014). In order to estimate the hubness of a point, (Radovanovi´c et al., 2010) first compute Nx(k), the counts of all points y such that x 2 k−NN(y), normalized over all k. The skewness of the distribution over Nx(k) is defined as the hubness of the point, with positive skew representing hubs and negative skew representing isolated points. An approximation to this would be Nx(1), i.e the number of points for which x is the nearest neighbor. We use a simple hubness filtering mechanism to filter out words in the target domain that are hubs, i.e., words in the target domain which have more than a threshold number of neighbors in the source domain are not considered in the iterative dictionary expansion. Empirically, this leads to a small boost in performance. In our models, we use iterative Procrustes refinement with hubness filtering at each refinement step. 4 Experiments and Results In this section, we measure the GH distances between embedding spaces of various language pairs, and compute their correlation with several empirical measures of orthogonality. Next, we analyze the performance of the instantiations of our semi-supervised framework for two settings of supervised losses, and show that they outperform their supervised and unsupervised counterparts for a majority of the language pairs. Finally we analyze our performance with varying amounts of supervision and highlight the framework’s training stability over unsupervised methods. 4.1 Empirical Evaluation of GH Distance To evaluate the lower bound on the GH distance between the two embedding spaces, we select the 188 ru-uk en-fr en-es es-fr en-uk en-ru en-sv en-el en-hi en-ko |Corr| |Corr| (GH) (⇤) GH 0.18 0.17 0.2 0.24 0.34 0.44 0.46 0.47 0.5 0.92 * * ⇤ 16.4 4.1 5.9 4.1 11.7 14.7 7.3 11.5 7.7 6.6 * * MUSE(U) * 82.3 81.7 85.5 29.1 44.0 53.3 37.9 34.6 5.1 0.87 0.61 RCSLS * 83.3 84.1 87.1 38.3 57.9 61.7 47.6 37.3 37.5 0.74 0.52 GeoMM * 82.1 81.4 87.8 39.1 51.3 65.0 47.8 39.8 34.6 0.76 0.49 BLISS(R) * 83.9 84.3 87.1 40.7 57.1 65.1 48.5 38.1 39.9 0.73 0.50 ||I −W T W||2 0.03 0.01 0.03 0.02 59.8 54.3 71.6 72.6 106.3 98.46 0.84 0.75 Table 2: Correlation of GH and Eigenvector similarity with performance of BLI methods. Bold marks best metrics. 5000 most frequent words of the source and target language and compute the Bottleneck distance. These embeddings are mean centered, unit normed and the Euclidean distance is used as the distance metric. Row 1 of Table 2 summarizes the GH distances obtained for different language pairs. We find that etymologically close languages such as en-fr and ru-uk have a very low GH distance and can possibly be aligned well using orthogonal transforms. In contrast, we find that etymologically distant language pairs such as en-ru and en-hi cannot be aligned well using orthogonal transforms. To further corroborate this, similar to Søgaard et al. (2018) , we compute correlations of the GH distance with the accuracies of several methods for BLI. 
We find that the GH distance exhibits a strong negative correlation with these accuracies, implying that as the GH distance increases, it becomes increasingly difficult to align these language pairs. Søgaard et al. (2018) proposed the eigenvector similarity metric between embedding spaces for measuring similarity between the embedding spaces. We compute their metric over top n (100, 500, 1000, 5000 and 10000) embeddings (Column ⇤in Table 2 shows correlation for the best setting of n) and show that the GH distance (Column GH) correlates better with the accuracies than eigenvector similarity. Furthermore, we also compute correlations against an empirical measure of the orthogonality of two embedding spaces by computing ||I −W T W||2, where W is a mapping from one language to the other obtained from an unsupervised method (MUSE(U)). Note that an advantage of this metric is that it can be computed even when the supervised dictionaries are not available (ru-uk in Table 2). We obtain a strong correlation with this metric as well. 4.2 Benchmark Tasks: Setup Baseline Methods MUSE (U/S/IR): Lample et al. (2018) proposed two models: MUSE(U) and MUSE(S) for unsupervised and supervised BLI respectively. MUSE(U) uses a GAN based distribution matching followed by iterative Procrustes refinement. MUSE(S) learns an orthogonal map between the embedding spaces by minimizing the Euclidean distance between the supervised translation pairs. Note that for unit normed embedding spaces, this is equivalent to maximizing the cosine similarity between these pairs. MUSE(IR) is the semisupervised extension of MUSE(S), which uses iterative refinement using the CSLS distance starting from the mapping learnt by MUSE(S). We also use our proposed hubness filtering technique during the iterative refinement process (MUSE(HR)) which leads to small performance improvements. We consequently use the hubness filtering technique in all our models. RCSLS: Joulin et al. (2018) propose optimizing the CSLS distance‡ directly for the supervised matching pairs. This leads to significant improvements over MUSE(S) and achieves state of the art results for a majority of the language pairs at the time of writing. VecMap models: Artetxe et al. (2017) and Artetxe et al. (2018a) proposed two models, VecMap and VecMap++ which were based on Iterative Procrustes refinement starting from a small seed lexicon based on numeral matching. We also compare against two well known methods GeoMM (Jawanpuria et al., 2018) and Vecmap (U)++ (Artetxe et al., 2018b). These methods learn orthogonal mappings for both source and target spaces to a common embedding space, and ‡Since the CSLS distance requires computing the nearest neighbors over the whole embedding space, this can also be considered a semi-supervised method. 189 subsequently translate in the common space. BLISS models We instantiate two instances of our framework corresponding to the two supervised losses in the baseline methods mentioned above. BLISS(M) optimizes the cosine distance between supervised matching pairs as its supervised loss (LW|S), while BLISS(R) optimizes the CSLS distance between these matching pairs for its LW|S. We use the unsupervised CSLS metric as a stopping criterion during training. This metric, introduced by Lample et al. (2018), computes the average cosine similarity between matched source-target pairs using the CSLS distance for NN retrieval; and the authors showed that this correlates well with ground truth accuracy. 
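To make the two instantiations concrete, the following PyTorch-style sketch shows one alternating training step combining the three losses of Section 3.2 for the BLISS(M) choice of cosine similarity for both f_s and f_a. It is illustrative rather than the released code, and it assumes a sigmoid-output MLP discriminator; BLISS(R) would replace the supervised cosine term with the CSLS criterion of Section 3.3, which needs k-nearest-neighbour averages over the full embedding spaces and is omitted here.

```python
import torch
import torch.nn.functional as F

def bliss_step(W, disc, opt_w, opt_d, x_batch, y_batch, x_sup, y_sup):
    """One alternating update of the BLISS(M) objective (sketch of Eqs. 3-6).

    W:       (d, d) mapping matrix, a torch.nn.Parameter optimised by opt_w
    disc:    MLP discriminator ending in a sigmoid, optimised by opt_d
    x_batch: (n, d) source embeddings, y_batch: (m, d) target embeddings
    x_sup:   (k, d) seed-dictionary source embeddings, y_sup: their translations
    """
    # discriminator update: tell mapped source apart from target (Eq. 3)
    opt_d.zero_grad()
    mapped = (x_batch @ W.T).detach()
    d_loss = -(torch.log(1 - disc(mapped) + 1e-8).mean()
               + torch.log(disc(y_batch) + 1e-8).mean())
    d_loss.backward()
    opt_d.step()

    # mapping update: adversarial + supervised + cycle-consistency terms (Eq. 6)
    opt_w.zero_grad()
    l_adv = -torch.log(disc(x_batch @ W.T) + 1e-8).mean()                     # L_{W|D}
    l_sup = -F.cosine_similarity(x_sup @ W.T, y_sup, dim=-1).mean()           # L_{W|S}
    l_orth = -F.cosine_similarity(x_batch @ W.T @ W, x_batch, dim=-1).mean()  # L_{W|O}
    (l_adv + l_sup + l_orth).backward()
    opt_w.step()
```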
After learning the final mapping matrix, the translations of the words in the source language are mapped to the target space and their nearest neighbors according to the CSLS distance are chosen as the translations. Datasets We evaluate our models against baselines on two popularly used datasets: the MUSE dataset and the VecMap dataset. The MUSE dataset used by Lample et al. (2018) consists of embeddings trained by Bojanowski et al. (2017) on Wikipedia and bilingual dictionaries generated by internal translation tools used at Facebook. The VecMap dataset introduced by Dinu and Baroni (2014) consists of the CBOW embeddings trained on the WacKy crawling corpora. The bilingual dictionaries were obtained from the Europarl word alignments. We use the standard training and test splits available for both the datasets. 4.3 Benchmark Tasks: Results In Tables 3 and 4, we group the instantiations of BLISS(M/R) with it’s supervised counterparts. We use † to compare models within a group, and use bold do compare across different groups for a language pair. As can be seen from Table 3, BLISS(M/R) outperform baseline methods within their groups for 9 of 10 language pairs. Moreover BLISS(R) gives the best accuracy across all baseline methods for 6 out of 10 language pairs. We observe a similar trend for the VecMap datasets, where BLISS(M/R) outperforms its supervised and unsupervised counterparts (Table 4). It can be seen that BLISS(M) and BLISS(R) outperform the MUSE baselines (MUSE(U), MUSE(R)) and RCSLS respectively. We observe that GeoMM and VecMap(U)++ outperform BLISS models on the VecMap datasets. A potential reason for this could be the slight disadvantage that BLISS suffers from because of translating in the target space, as opposed to in the common embedding space. This hypothesis is also supported by the results of Kementchedjhieva et al. (2018). All the hyperparameters for the experiments can be found in the Appendix (§A.4) 4.4 Benefits of BLISS Languages with high GH distance: As can be seen from Table 2, BLISS(R) substantially outperforms RCSLS on 6 of 9 language pairs, especially when the GH distance between the pairs is high (en-uk (2.4%), en-sv (3.4%), en-el (0.9%), en-hi(0.8%), en-ko (2.4%)). Results from Table 3 also underscores this point, wherein BLISS(R) performs at least at par with (and often better than) RCSLS on European languages, and performs significantly better on en-zh (2.8%) and zhen (0.9%). Performance with varying amount of supervision: Table 5 shows the performance of BLISS(R) as a function of the number of data points provided for supervision. As can be observed, the model performs reasonably well even for low amounts of supervision and outperforms the unsupervised baseline MUSE(U) and it’s supervised counterpart RCSLS. Moreover, note that the difference is more prominent for the etymologically distant pair en$zh. In this case the baseline models completely fail to train for 50 points, whereas BLISS(R) performs reasonably well. Stability of Training: We also observe that providing even a little bit of supervision helps stabilize the training process, when compared to purely unsupervised distribution matching. We measure the stability during training using both the ground truth accuracy and the unsupervised CSLS metric. As can be seen from Figure 2, BLISS(M) is significantly more stable than MUSE(U), converging to better accuracy and CSLS values. Furthermore, for en$zh, Vecmap(U)++ fails to converge, while MUSE is somewhat unstable. 
However, BLISS does not suffer from this issue. When the word vectors are not rich enough 190 Model Type Objective Translation en-es es-en en-fr fr-en en-de de-en en-ru ru-en en-zh zh-en Space MUSE(U) Unsup GAN target 81.7 83.3 82.3 82.1 74.0 72.2 44.0 59.1 32.5 31.4 MUSE(S) Sup Cos target 81.4 82.9 81.1 82.4 73.5 72.4 51.7 63.7 42.7† 36.7 MUSE(IR) Semi Cos + IR target 81.9 83.5 82.1 82.4 74.3 72.7 51.7 63.7 42.7† 36.7 MUSE(HR) Semi Cos + IR target 82.3† 83.3 82.5 83.2 75.7† 72.8 52.8 64.1† 42.7† 36.7 BLISS(M) Semi Cos + GAN target 82.3† 84.3† 83.3† 83.9† 75.7† 73.8† 55.7† 63.7 41.1 41.4† RCSLS Semi CSLS target 84.1 86.3† 83.3 84.1 79.1† 76.3 57.9† 67.2 45.9 46.4 BLISS(R) Semi CSLS + GAN target 84.3† 86.2 83.9† 84.7† 79.1† 76.6† 57.1 67.7† 48.7† 47.3† GeoMM Sup Classification common 81.4 85.5 82.1 84.1 74.7 76.7 51.3 67.6 49.1 45.3 Loss Vecmap(U)++ Unsup NN Based Dist common 82.2 84.5 82.5 83.6 75.2 74.2 48.5 65.1 0.0 0.0 matching + IR Table 3: Performance comparison of BLISS on the MUSE dataset. Sup, Unsup and Semi refer to supervised, unsupervised and semi-supervised methods. Objective refers to the metric optimized. † marks the best in each category, while bold marks the best performance across all groups for a language pair. Pairs # Vec Vec MUSE MUSE BLISS RCSLS BLISS GeoMM Vec seeds Map Map++ (U) (IR) (M) (R) Map(U)++ en-it all 39.7 45.3 45.8 45.3 45.9† 45.4 46.2† 48.3 48.5 Num. 37.3 45.8† 0.7 44.3 0.3 44.6† 1.2 48.5 en-de all 40.9 44.1 0.0 47.0 48.3† 47.3 48.1† 48.9 48.1 Num. 39.6 0.0 39.9 47.2† 1.0 46.5† 2.3 48.1 Table 4: Performance of different models on the VecMap dataset. † marks the best in each category, while bold marks the best performance across different levels of supervision for a language pair. # Datapoints Model en-es es-en en-fr fr-en en-de de-en en-ru ru-en en-zh zh-en * MUSE(U) 81.7 83.3 82.3 82.1 74.0 72.2 44.0 59.1 32.5† 31.4† * Vecmap(U)++ 82.2† 84.5† 82.5† 83.6† 75.2† 74.2† 48.5† 65.1† 0.0 0.0 50 MUSE(IR) 0.3 82.7 0.5 1.6 31.9 72.7† 0.1 0.0 0.3 0.3 GeoMM 0.3 1.9 0.3 1.0 0.3 0.3 0.0 0.6 0.0 0.0 RCSLS 0.1 0.4 0.0 0.3 0.1 0.1 0.1 0.1 0.0 0.0 BLISS (R) 82.1† 83.6† 82.8† 83.0† 75.1† 72.7† 39.3† 61.0† 32.6† 32.5† 500 MUSE(IR) 81.6 83.5† 82.1 82.0 73.1 72.7 40.3 62 34.5 32.2 GeoMM 31.9 46.6 34.4 44.7 13.5 14.7 10.6 20.5 3.9 2.9 RCSLS 22.9 44.9 22.4 43.5 9.9 10.2 7.9 19.6 6.6 7.1 BLISS(R) 82.3† 83.4 82.3† 82.9† 74.7† 73.1† 41.6† 63.0† 36.3† 35.1† 5000 MUSE(IR) 81.9 82.8 82.2 82.1 75.2 72.4 50.4 63.7 39.2 36.3 GeoMM 79.7 82.7 79.9 83.2 71.7 70.6 49.7 65.5† 43.7† 40.1 RCSLS 80.9 82.9 80.4 82.5 72.5 70.9 51.3 63.8 42.5 41.9 BLISS(R) 82.4† 84.9† 82.6† 83.9† 75.7† 72.5† 52.1† 65.2 42.5 42.8† Table 5: Performance with different levels of supervision. † marks the best performance at a given level of supervision, while bold marks the best for a language pair. (word2vec (Mikolov et al., 2013b) instead of fastText), the unsupervised method can completely fail to train. This can be observed for the case of en-de in Table 4. BLISS(M/R) does not face this problem: adding supervision, even in the form of 50 mapped words for the case of en-de, helps it to achieve reasonable performance. 5 Related Work Mikolov et al. (2013a) first used anchor points to align two embedding spaces, leveraging the fact that these spaces exhibit similar structure across languages. 
Figure 2: Training stability for different language pairs: (en-de), (en-ru), (en-zh).

Since then, several approaches have been proposed for learning bilingual dictionaries (Faruqui and Dyer, 2014; Zou et al., 2013; Xing et al., 2015). Xing et al. (2015) showed that adding an orthogonality constraint significantly improves performance, and admits a closed-form solution. This was further corroborated by the work of Smith et al. (2017), who showed that orthogonality is necessary for self-consistency. Artetxe et al. (2016) showed the equivalence between the different methods, and their subsequent work (Artetxe et al., 2018a) analyzed techniques proposed across various works (such as embedding centering and whitening), showing that combining several of them yields significant performance gains. However, the validity of this orthogonality assumption has of late come into question: Zhang et al. (2017b) found that the Wasserstein distance between distant language pairs was considerably higher, while Søgaard et al. (2018) explored the orthogonality assumption using eigenvector similarity. We find that our weak orthogonality constraint (along the lines of Zhang et al. (2017a)), when used in our semi-supervised framework, is more robust to this issue.

There has also recently been an increasing focus on generating these bilingual mappings without an aligned bilingual dictionary, i.e., in an unsupervised manner. Zhang et al. (2017a) and Lample et al. (2018) both use adversarial training for aligning two monolingual embedding spaces without any seed lexicon; Zhang et al. (2017b) use a Wasserstein GAN to achieve this adversarial alignment, together with an earth-mover-based fine-tuning approach; and Grave et al. (2018) formulate this as a joint estimation of an orthogonal matrix and a permutation matrix. However, we show that adding a little supervision, which is usually easy to obtain, improves performance.

Another vein of research (Jawanpuria et al., 2018; Artetxe et al., 2018b; Kementchedjhieva et al., 2018) has been to learn orthogonal mappings from both the source and the target embedding spaces into a common embedding space and to perform the translations in that common space. Artetxe et al. (2017) and Søgaard et al. (2018) motivate the utility of using both the supervised seed dictionaries and, to some extent, the structure of the monolingual embedding spaces. They use iterative Procrustes refinement starting with a small seed dictionary to learn a mapping, but doing so may lead to sub-optimal performance for distant language pairs. These methods are close to ours in spirit, and consequently form the baselines for our experiments.

Another avenue of research has been to modify the underlying embedding generation algorithms. Cao et al. (2016) modify the CBOW algorithm (Mikolov et al., 2013b) by augmenting the CBOW loss to match the first- and second-order moments of the source and target latent spaces, thereby ensuring that the source and target embedding spaces follow the same distribution. Luong et al. (2015) use the aligned words to jointly learn the embedding spaces of both the source and target language, by trying to predict the context of a word in the other language given an alignment.
An issue with these approaches is that they require retraining the embeddings, and cannot leverage rich collections of precomputed vectors (such as those provided by word2vec (Mikolov et al., 2013b), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017)).

6 Conclusions

In this work, we analyze the validity of the orthogonality assumption and show that it breaks down for distant language pairs. We motivate the task of semi-supervised BLI by showing the shortcomings of purely supervised and unsupervised approaches. We then propose a semi-supervised framework which combines the advantages of supervised and unsupervised approaches and uses a joint optimization loss to enforce a weak and flexible orthogonality constraint. We provide two instantiations of our framework, and show that both outperform their supervised and unsupervised counterparts. On analyzing the model errors, we find that a large fraction of them arise due to polysemy and antonymy (the interested reader can find the details in Appendix §A.2). We also find that translating in a common embedding space, as opposed to the target embedding space, obtains orthogonal gains for BLI, and we plan to investigate this in the semi-supervised setting in future work.

Acknowledgements

We would like to thank Sebastian Ruder and Anders Søgaard for their assistance with computing the eigenvector similarity metric. We would also like to thank Paul Michel and Junjie Hu for their invaluable feedback and discussions that helped shape the paper into its current form. Finally, we would also like to thank the anonymous reviewers for their valuable comments and helpful suggestions.

References

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2289–2294.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462, Vancouver, Canada. Association for Computational Linguistics.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18).

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 789–798.

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146.

Hailong Cao, Tiejun Zhao, Shu Zhang, and Yao Meng. 2016. A distribution-based model to learn bilingual word embeddings. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1818–1827.

Frédéric Chazal, David Cohen-Steiner, Leonidas J. Guibas, Facundo Mémoli, and Steve Y. Oudot. 2009. Gromov-Hausdorff stable signatures for shapes using persistence. In Computer Graphics Forum, volume 28, pages 1393–1403. Wiley Online Library.

Georgiana Dinu and Marco Baroni. 2014.
Improving zero-shot learning by mitigating the hubness problem. volume abs/1412.6568. Herbert Edelsbrunner and Dmitriy Morozov. 2013. Persistent homology: theory and practice. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 462–471. Edouard Grave, Armand Joulin, and Quentin Berthet. 2018. Unsupervised alignment of embeddings with wasserstein procrustes. arXiv preprint arXiv:1805.11222. Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proceedings of ACL08: HLT, pages 771–779, Columbus, Ohio. Association for Computational Linguistics. Ann Irvine and Chris Callison-Burch. 2013. Combining bilingual and comparable corpora for low resource machine translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 262–270, Sofia, Bulgaria. Association for Computational Linguistics. Pratik Jawanpuria, Arjun Balgovind, Anoop Kunchukuttan, and Bamdev Mishra. 2018. Learning multilingual word embeddings in latent metric space: a geometric approach. arXiv preprint arXiv:1808.08773. Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv´e J´egou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2979–2984. Yova Kementchedjhieva, Sebastian Ruder, Ryan Cotterell, and Anders Søgaard. 2018. Generalizing procrustes analysis for better bilingual dictionary induction. In Proceedings of the 22nd Conference on 193 Computational Natural Language Learning, pages 211–220. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. pages 1459–1474. Guillaume Lample, Alexis Conneau, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. In International Conference on Learning Representations. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Bilingual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151–159. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 529–535, New Orleans, Louisiana. Association for Computational Linguistics. Miloˇs Radovanovi´c, Alexandros Nanopoulos, and Mirjana Ivanovi´c. 
2010. Hubs in space: Popular nearest neighbors in high-dimensional data. volume 11, pages 2487–2531. Sebastian Ruder. 2017. A survey of cross-lingual embedding models. CoRR, abs/1706.04902. Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 778–788. Min Xiao and Yuhong Guo. 2014. Distributed word representation learning for cross-lingual dependency parsing. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 119–129. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006–1011. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017a. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1959–1970. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Earth mover’s distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1934–1945. Yuan Zhang, David Gaddy, Regina Barzilay, and Tommi Jaakkola. 2016. Ten pairs to tagmultilingual pos tagging via coarse mapping between embeddings. Association for Computational Linguistics. Will Y Zou, Richard Socher, Daniel Cer, and Christopher D Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1393–1398.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1842–1861 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1842 Visually Grounded Neural Syntax Acquisition Haoyue Shi†,∗ Jiayuan Mao‡,∗ Kevin Gimpel† Karen Livescu† †: Toyota Technological Institute at Chicago, IL, USA ‡: ITCS, Institute for Interdisciplinary Information Sciences, Tsinghua University, China {freda, kgimpel, klivescu}@ttic.edu, [email protected] Abstract We present the Visually Grounded Neural Syntax Learner (VG-NSL), an approach for learning syntactic representations and structures without explicit supervision. The model learns by looking at natural images and reading paired captions. VG-NSL generates constituency parse trees of texts, recursively composes representations for constituents, and matches them with images. We define the concreteness of constituents by their matching scores with images, and use it to guide the parsing of text. Experiments on the MSCOCO data set show that VG-NSL outperforms various unsupervised parsing approaches that do not use visual grounding, in terms of F1 scores against gold parse trees. We find that VGNSL is much more stable with respect to the choice of random initialization and the amount of training data. We also find that the concreteness acquired by VG-NSL correlates well with a similar measure defined by linguists. Finally, we also apply VG-NSL to multiple languages in the Multi30K data set, showing that our model consistently outperforms prior unsupervised approaches.1 1 Introduction We study the problem of visually grounded syntax acquisition. Consider the images in Figure 1, paired with the descriptive texts (captions) in English. Given no prior knowledge of English, and sufficient such pairs, one can infer the correspondence between certain words and visual attributes, (e.g., recognizing that “a cat” refers to the objects in the blue boxes). One can further extract constituents, by assuming that concrete spans of words should be processed as a whole, and thus form the ∗HS and JM contributed equally to the work. 1 Project page: https://ttic.uchicago.edu/ ˜freda/project/vgnsl A cat stands under an umbrella. A cat is on the ground. A dog sits under an umbrella. Figure 1: We propose to use image-caption pairs to extract constituents from text, based on the assumption that similar spans should be matched to similar visual objects and these concrete spans form constituents. constituents. Similarly, the same process can be applied to verb or prepositional phrases. This intuition motivates the use of image-text pairs to facilitate automated language learning, including both syntax and semantics. In this paper we focus on learning syntactic structures, and propose the Visually Grounded Neural Syntax Learner (VG-NSL, shown in Figure 2). VG-NSL acquires syntax, in the form of constituency parsing, by looking at images and reading captions. At a high level, VG-NSL builds latent constituency trees of word sequences and recursively composes representations for constituents. Next, it matches the visual and textual representations. The training procedure is built on the hypothesis that a better syntactic structure contributes to a better representation of constituents, which then leads to better alignment between vision and language. We use no human-labeled constituency trees or other syntactic labeling (such as part-of-speech tags). 
Instead, we define a concreteness score of constituents based on their matching with images, and use it to guide the parsing of sentences. At test time, no images paired with the text are needed. We compare VG-NSL with prior approaches to unsupervised language learning, most of which 1843 do not use visual grounding. Our first finding is that VG-NSL improves over the best previous approaches to unsupervised constituency parsing in terms of F1 scores against gold parse trees. We also find that many existing approaches are quite unstable with respect to the choice of random initialization, whereas VG-NSL exhibits consistent parsing results across multiple training runs. Third, we analyze the performance of different models on different types of constituents, and find that our model shows substantial improvement on noun phrases and prepositional phrases which are common in captions. Fourth, VG-NSL is much more data-efficient than prior work based purely on text, achieving comparable performance to other approaches using only 20% of the training captions. In addition, the concreteness score, which emerges during the matching between constituents and images, correlates well with a similar measure defined by linguists. Finally, VG-NSL can be easily extended to multiple languages, which we evaluate on the Multi30K data set (Elliott et al., 2016, 2017) consisting of German and French image captions. 2 Related Work Linguistic structure induction from text. Recent work has proposed several approaches for inducing latent syntactic structures, including constituency trees (Choi et al., 2018; Yogatama et al., 2017; Maillard and Clark, 2018; Havrylov et al., 2019; Kim et al., 2019; Drozdov et al., 2019) and dependency trees (Shi et al., 2019), from the distant supervision of downstream tasks. However, most of the methods are not able to produce linguistically sound structures, or even consistent ones with fixed data and hyperparameters but different random initializations (Williams et al., 2018). A related line of research is to induce latent syntactic structure via language modeling. This approach has achieved remarkable performance on unsupervised constituency parsing (Shen et al., 2018a, 2019), especially in identifying the boundaries of higher-level (i.e., larger) constituents. To our knowledge, the Parsing-Reading-Predict Network (PRPN; Shen et al., 2018a) and the Ordered Neuron LSTM (ON-LSTM; Shen et al., 2019) currently produce the best fully unsupervised constituency parsing results. One issue with PRPN, however, is that it tends to produce meaningless parses for lower-level (smaller) constituents (Phu Mon Htut et al., 2018). Over the last two decades, there has been extensive study targeting unsupervised constituency parsing (Klein and Manning, 2002, 2004, 2005; Bod, 2006a,b; Ponvert et al., 2011) and dependency parsing (Klein and Manning, 2004; Smith and Eisner, 2006; Spitkovsky et al., 2010; Han et al., 2017). However, all of these approaches are based on linguistic annotations. Specifically, they operate on the part-of-speech tags of words instead of word tokens. One exception is Spitkovsky et al. (2011), which produces dependency parse trees based on automatically induced pseudo tags. In contrast to these existing approaches, we focus on inducing constituency parse trees with visual grounding. We use parallel data from another modality (i.e., paired images and captions), instead of linguistic annotations such as POS tags. 
We include a detailed comparison between some related works in the supplementary material. There has been some prior work on improving unsupervised parsing by leveraging extra signals, such as parallel text (Snyder et al., 2009), annotated data in another language with parallel text (Ganchev et al., 2009), annotated data in other languages without parallel text (Cohen et al., 2011), or non-parallel text from multiple languages (Cohen and Smith, 2009). We leave the integration of other grounding signals as future work. Grounded language acquisition. Grounded language acquisition has been studied for imagecaption data (Christie et al., 2016a), video-caption data (Siddharth et al., 2014; Yu et al., 2015), and visual reasoning (Mao et al., 2019). However, existing approaches rely on human labels or rules for classifying visual attributes or actions. Instead, our model induces syntax structures with no humandefined labels or rules. Meanwhile, learning visual-semantic representations in a joint embedding space (Ngiam et al., 2011) is a widely studied approach, and has achieved remarkable results on image-caption retrieval (Kiros et al., 2014; Faghri et al., 2018; Shi et al., 2018a), image caption generation (Kiros et al., 2014; Karpathy and Fei-Fei, 2015; Ma et al., 2015), and visual question answering (Malinowski et al., 2015). In this work, we borrow this idea to match visual and textual representations. Concreteness estimation. Turney et al. (2011) define concrete words as those referring to things, events, and properties that we can perceive directly with our senses. Subsequent work has studied 1844 Image Encoder Image A cat is on the ground Caption Structure and Representation Inference 𝒗(𝑖) 𝒄1 (𝑖) 𝒄2 (𝑖) 𝒄3 (𝑖) 𝒗(𝑖) Constituency Parse Tree Visual-Semantic Embeddings Embeddings of Constituents Image Embedding (Score-Sample-Combine) Figure 2: VG-NSL consists of two modules: a textual module for inferring structures and representations for captions, and a visual-semantic module for matching constituents with images. VG-NSL induces constituency parse trees of captions by looking at images and reading paired captions. word-level concreteness estimation based on text (Turney et al., 2011; Hill et al., 2013), human judgments (Silberer and Lapata, 2012; Hill and Korhonen, 2014a; Brysbaert et al., 2014), and multimodal data (Hill and Korhonen, 2014b; Hill et al., 2014; Kiela et al., 2014; Young et al., 2014; Hessel et al., 2018; Silberer et al., 2017; Bhaskar et al., 2017). As with Hessel et al. (2018) and Kiela et al. (2014), our model uses multi-modal data to estimate concreteness. Compared with them, we define concreteness for spans instead of words, and use it to induce linguistic structures. 3 Visually Grounded Neural Syntax Learner Given a set of paired images and captions, our goal is to learn representations and structures for words and constituents. Toward this goal, we propose the Visually Grounded Neural Syntax Learner (VGNSL), an approach for the grounded acquisition of syntax of natural language. VG-NSL is inspired by the idea of semantic bootstrapping (Pinker, 1984), which suggests that children acquire syntax by first understanding the meaning of words and phrases, and linking them with the syntax of words. At a high level (Figure 2), VG-NSL consists of 2 modules. First, given an input caption (i.e., a sentence or a smaller constituent), as a sequence of tokens, VG-NSL builds a latent constituency parse tree, and recursively composes representations for every constituent. 
Next, it matches the textual representations of the constituents with the corresponding visual input, i.e., the paired image. Both modules are jointly optimized from natural supervision: the model acquires constituency structures, composes textual representations, and links them with visual scenes, by looking at images and reading paired captions.

3.1 Textual Representations and Structures

VG-NSL starts by composing a binary constituency structure of text, using an easy-first bottom-up parser (Goldberg and Elhadad, 2010). The composition of the tree from a caption of length n consists of n − 1 steps. Let $X^{(t)} = (x_1^{(t)}, x_2^{(t)}, \cdots, x_k^{(t)})$ denote the textual representations of a sequence of constituents after step t, where k = n − t. For simplicity, we use $X^{(0)}$ to denote the word embeddings for all tokens (the initial representations). At step t, a score function $\mathrm{score}(\cdot; \Theta)$, parameterized by $\Theta$, is evaluated on all pairs of consecutive constituents, resulting in a vector $\mathrm{score}(X^{(t-1)}; \Theta)$ of length n − t:
$$\mathrm{score}(X^{(t-1)}; \Theta)_j \triangleq \mathrm{score}\left(\left[x_j^{(t-1)}, x_{j+1}^{(t-1)}\right]; \Theta\right).$$
We implement $\mathrm{score}(\cdot; \Theta)$ as a two-layer feed-forward network. A pair of constituents $(x_{j^*}^{(t-1)}, x_{j^*+1}^{(t-1)})$ is sampled from all pairs of consecutive constituents, with respect to the distribution produced by a softmax (at test time, we take the argmax):
$$\Pr[j^*] = \frac{\exp\left(\mathrm{score}(X^{(t-1)}; \Theta)_{j^*}\right)}{\sum_j \exp\left(\mathrm{score}(X^{(t-1)}; \Theta)_j\right)}.$$
The selected pair is combined to form a single new constituent; thus, after step t, the number of constituents is decreased by 1. The textual representation for the new constituent is defined as the L2-normalized sum of the two component constituents:
$$\mathrm{combine}\left(x_{j^*}^{(t-1)}, x_{j^*+1}^{(t-1)}\right) \triangleq \frac{x_{j^*}^{(t-1)} + x_{j^*+1}^{(t-1)}}{\left\| x_{j^*}^{(t-1)} + x_{j^*+1}^{(t-1)} \right\|_2}.$$

Figure 3: An illustration of how VG-NSL composes a constituency parse tree. At each step, the score function is evaluated on all pairs of consecutive constituents (dashed lines). Next, a pair of constituents is sampled from all pairs w.r.t. a distribution computed by the softmax of all predicted scores. The selected pair of constituents is combined into a larger one, while the other constituents remain unchanged (solid lines).

We find that using a more complex encoder for constituents, such as GRUs, causes the representations to be highly biased towards a few salient words in the sentence (e.g., the encoder encodes only the word "cat" while ignoring the rest of the caption; Shi et al., 2018a; Wu et al., 2019), which significantly degrades the performance of linguistic structure induction. We repeat this score-sample-combine process for n − 1 steps, until all words in the input text have been combined into a single constituent (Figure 3). This ends the inference of the constituency parse tree. Since at each time step we combine two consecutive constituents, the derived tree contains 2n − 1 constituents (including all words).

3.2 Visual-Semantic Embeddings

We follow an approach similar to that of Kiros et al. (2014) to define the visual-semantic embedding (VSE) space for paired images and text constituents. Let $v^{(i)}$ denote the vector representation of an image i, and $c_j^{(i)}$ denote the vector representation of the j-th constituent of its corresponding text caption.
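Before defining the matching function, the score-sample-combine step of Section 3.1 can be summarized in a short sketch. The PyTorch code below is an illustration under our own assumptions rather than the released implementation: the class and function names are invented for exposition, and the 512-D embeddings with a 128-unit hidden layer follow the implementation details reported in Section 4.3.

```python
# Illustrative sketch of one score-sample-combine parsing step (not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairScorer(nn.Module):
    """Two-layer feed-forward scorer over pairs of adjacent constituent embeddings."""
    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, X):                                 # X: (k, dim) constituents after step t-1
        pairs = torch.cat([X[:-1], X[1:]], dim=-1)        # all consecutive pairs -> (k-1, 2*dim)
        return self.net(pairs).squeeze(-1)                # (k-1,) scores

def parse_step(X, scorer, greedy=False):
    """Merge one adjacent pair: sample during training, take the argmax at test time."""
    probs = F.softmax(scorer(X), dim=0)
    j = int(probs.argmax()) if greedy else int(torch.multinomial(probs, 1))
    merged = X[j] + X[j + 1]
    merged = merged / merged.norm(p=2)                    # L2-normalized sum (the combine step)
    X_new = torch.cat([X[:j], merged.unsqueeze(0), X[j + 2:]], dim=0)
    return X_new, j, torch.log(probs[j])                  # log-prob kept for the policy-gradient update
```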
During the matching with images, we ignore the tree structure and index the constituents as a flat list. A function $m(\cdot, \cdot; \Phi)$ is defined as the matching score between images and texts:
$$m(v^{(i)}, c_j^{(i)}; \Phi) \triangleq \cos\left(\Phi v^{(i)}, c_j^{(i)}\right),$$
where the parameters $\Phi$ map the visual representation into the joint visual-semantic space.

3.3 Training

We optimize the visual-semantic representations ($\Phi$) and the constituency structures ($\Theta$) in an alternating fashion. At each iteration, given the constituency parsing results of the captions, $\Phi$ is optimized for matching the visual and the textual representations. Next, given the visual grounding of the constituents, $\Theta$ is optimized for producing constituents that can be better matched with images.

Specifically, we optimize the textual representations and the visual-semantic embedding space using a hinge-based triplet ranking loss:
$$\mathcal{L}(\Phi; \mathcal{V}, \mathcal{C}) = \sum_{i,\, k \neq i,\, j,\, \ell} \left[ m(c_\ell^{(k)}, v^{(i)}) - m(c_j^{(i)}, v^{(i)}) + \delta \right]_+ + \sum_{i,\, k \neq i,\, j} \left[ m(c_j^{(i)}, v^{(k)}) - m(c_j^{(i)}, v^{(i)}) + \delta \right]_+,$$
where i and k index over all image-caption pairs in the data set, j and $\ell$ enumerate all constituents of a specific caption ($c^{(i)}$ and $c^{(k)}$, respectively), $\mathcal{V} = \{v^{(i)}\}$ is the set of image representations, $\mathcal{C} = \{c_j^{(i)}\}$ is the set of textual representations of all constituents, $\delta$ is a constant margin, and $[\cdot]_+$ denotes $\max(0, \cdot)$. The loss $\mathcal{L}$ extends the image-caption retrieval loss of Kiros et al. (2014) by introducing alignments between images and sub-sentence constituents.

We optimize the textual structures via distant supervision: they are optimized for a better alignment between the derived constituents and the images. Intuitively, the following objective encourages adjectives to be associated (combined) with the corresponding nouns, and verbs/prepositions to be associated (combined) with the corresponding subjects and objects. Specifically, we use REINFORCE (Williams, 1992) as the gradient estimator for $\Theta$. Consider the parsing process of a specific caption $c^{(i)}$, and denote the corresponding image embedding by $v^{(i)}$. For a constituent z of $c^{(i)}$, we define its (visual) concreteness $\mathrm{concrete}(z, v^{(i)})$ as:
$$\mathrm{concrete}(z, v^{(i)}) = \sum_{k \neq i,\, p} \left[ m(z, v^{(i)}) - m(c_p^{(k)}, v^{(i)}) - \delta' \right]_+ + \sum_{k \neq i} \left[ m(z, v^{(i)}) - m(z, v^{(k)}) - \delta' \right]_+, \quad (1)$$
where $\delta'$ is a fixed margin. At step t, we define the reward function for a combination of a pair of constituents $(x_j^{(t-1)}, x_{j+1}^{(t-1)})$ as:
$$r(x_j^{(t-1)}, x_{j+1}^{(t-1)}) = \mathrm{concrete}(z, v^{(i)}), \quad (2)$$
where $z \triangleq \mathrm{combine}(x_j^{(t-1)}, x_{j+1}^{(t-1)})$. In plain words, at each step, we encourage the model to compose a constituent that maximizes the alignment between the new constituent and the corresponding image. During training, we sample constituency parse trees of captions, and reinforce each composition step using Equation 2. At test time, no images paired with the text are needed.

3.4 The Head-Initial Inductive Bias

English and many other Indo-European languages are usually head-initial (Baker, 2001). For example, in verb phrases or prepositional phrases, the verb (or the preposition) precedes the complements (e.g., the object of the verb). Consider the simple caption "a white cat on the lawn". While the association of the adjective (white) could be induced from the visual grounding of phrases, whether the preposition (on) should be associated with "a white cat" or "the lawn" is more challenging to induce. Thus, we impose an inductive bias to guide the learner to correctly associate prepositions with their complements, determiners with the corresponding noun phrases, and complementizers with the corresponding relative clauses.
Specifically, we discourage abstract constituents (i.e., constituents that cannot be grounded in the image) from being combined with a preceding constituent, by modifying the original reward definition (Equation 2) as: r′(x(t−1) j ,x(t−1) j+1 ) = r(x(t−1) j , x(t−1) j+1 ) λ · abstract(x(t−1) j+1 , v(i)) + 1 , (3) where λ is a scalar hyperparameter, v(i) is the image embedding corresponding to the caption being parsed, and abstract denotes the abstractness of the span, defined analogously to concreteness (Equation 1): abstract(z, v(i)) = X k̸=i,p h m(c(k) p , v(i)) −m(z, v(i)) + δ′i + + X k̸=i h m(z, v(k)) −m(z, v(i)) + δ′i + , The intuition here is that the initial heads for prepositional phrases (e.g., on) and relative clauses (e.g., which, where) are usually abstract words. During training, we encourage the model to associate these abstract words with the succeeding constituents instead of the preceding ones. It is worth noting that such an inductive bias is languagespecific, and cannot be applied to head-final languages such as Japanese (Baker, 2001). We leave the design of head-directionality inductive biases for other languages as future work. 4 Experiments We evaluate VG-NSL for unsupervised parsing in a few ways: F1 score with gold trees, selfconsistency across different choices of random initialization, performance on different types of constituents, and data efficiency. In addition, we find that the concreteness score acquired by VG-NSL is consistent with a similar measure defined by linguists. We focus on English for the main experiments, but also extend to German and French. 4.1 Data Sets and Metrics We use the standard split of the MSCOCO data set (Lin et al., 2014), following Karpathy and FeiFei (2015). It contains 82,783 images for training, 1,000 for development, and another 1,000 for testing. Each image is associated with 5 captions. For the evaluation of constituency parsing, the Penn Treebank (PTB; Marcus et al., 1993) is a widely used, manually annotated data set. However, PTB consists of sentences from abstract domains, e.g., the Wall Street Journal (WSJ), which are not visually grounded and whose linguistic structures can hardly be induced by VG-NSL. Here we evaluate models on the MSCOCO test set, which is well-matched to the training domain; we leave the extension of our work to more abstract domains to future work. We apply Benepar (Kitaev and Klein, 2018),3 an off-the-shelf constituency parser 3 https://pypi.org/project/benepar 1847 with state-of-the-art performance (95.52 F1 score) on the WSJ test set,4 to parse the captions in the MSCOCO test set as gold constituency parse trees. We evaluate all of the investigated models using the F1 score compared to these gold parse trees.5 4.2 Baselines We compare VG-NSL with various baselines for unsupervised tree structure modeling of texts. We can categorize the baselines by their training objective or supervision. Trivial tree structures. Similarly to recent work on latent tree structures (Williams et al., 2018; Phu Mon Htut et al., 2018; Shi et al., 2018b), we include three types of trivial baselines without linguistic information: random binary trees, left-branching binary trees, and right-branching binary trees. Syntax acquisition by language modeling and statistics. Shen et al. (2018a) proposes the Parsing-Reading-Predict Network (PRPN), which predicts syntactic distances (Shen et al., 2018b) between adjacent words, and composes a binary tree based on the syntactic distances to improve language modeling. 
The learned distances can be mapped into a binary constituency parse tree, by recursively splitting the sentence between the two consecutive words with the largest syntactic distance. Ordered neurons (ON-LSTM; Shen et al., 2019) is a recurrent unit based on the LSTM cell (Hochreiter and Schmidhuber, 1997) that explicitly regularizes different neurons in a cell to represent shortterm or long-term information. After being trained on the language modeling task, Shen et al. (2019) suggest that the gate values in ON-LSTM cells can be viewed as syntactic distances (Shen et al., 2018b) between adjacent words to induce latent tree structures. ON-LSTM has the state-of-the-art unsupervised constituency parsing performance on the WSJ test set. We train both PRPN and ONLSTM on all captions in the MSCOCO training set and use the models as baselines. Inspired by the syntactic distance–based approaches (Shen et al., 2018a, 2019), we also introduce another baseline, PMI, which uses negative 4 We also manually label the constituency parse trees for 50 captions randomly sampled from the MSCOCO test split, where Benepar has an F1 score of 95.65 with the manual labels. Details can be found in the supplementary material. 5 Following convention (Sekine and Collins, 1997), we report the F1 score across all constituents in the data set, instead of the average of sentence-level F1 scores. pointwise mutual information (Church and Hanks, 1990) between adjacent words as the syntactic distance. We compose constituency parse trees based on the distances in the same way as PRPN and ON-LSTM. Syntax acquisition from downstream tasks. Choi et al. (2018) propose to compose binary constituency parse trees directly from downstream tasks using the Gumbel softmax trick (Jang et al., 2017). We integrate a Gumbel tree-based caption encoder into the visual semantic embedding approach (Kiros et al., 2014). The model is trained on the downstream task of image-caption retrieval. Syntax acquisition from concreteness estimation. Since we apply concreteness information to train VG-NSL, it is worth comparing against unsupervised constituency parsing based on previous approaches for predicting word concreteness. This set of baselines includes semi-supervised estimation (Turney et al., 2011), crowdsourced labeling (Brysbaert et al., 2014), and multimodal estimation (Hessel et al., 2018). Note that none of these approaches has been applied to unsupervised constituency parsing. Implementation details can be found in the supplementary material. Based on the concreteness score of words, we introduce another baseline similar to VG-NSL. Specifically, we recursively combine two consecutive constituents with the largest average concreteness, and use the average concreteness as the score for the composed constituent. The algorithm generates binary constituency parse trees of captions. For a fair comparison, we implement a variant of this algorithm that also uses a head-initial inductive bias and include the details in the appendix. 4.3 Implementation Details Across all experiments and all models (including baselines such as PRPN, ON-LSTM, and Gumbel), the embedding dimension for words and constituents is 512. For VG-NSL, we use a pre-trained ResNet-101 (He et al., 2016), trained on ImageNet (Russakovsky et al., 2015), to extract vector embeddings for images. Thus, Φ is a mapping from a 2048-D image embedding space to a 512-D visualsemantic embedding space. 
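As a concrete illustration of these components (our own sketch, not the released code), the image-side projection $\Phi$, the matching score m, and one hinge term of the triplet ranking loss from Section 3.3 might look as follows in PyTorch; the absence of a bias term and the margin value of 0.2 are assumptions made only for this example.

```python
# Illustrative sketch (not the official implementation) of the projection Phi,
# the matching score m, and a single hinge term of the triplet ranking loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageProjection(nn.Module):
    """Maps pooled 2048-D ResNet-101 features into the 512-D visual-semantic space."""
    def __init__(self, feat_dim=2048, embed_dim=512):
        super().__init__()
        self.phi = nn.Linear(feat_dim, embed_dim, bias=False)   # bias-free Phi is an assumption

    def forward(self, feats):               # feats: (batch, 2048) image features
        return self.phi(feats)              # (batch, 512) embeddings in the joint space

def matching_score(v, c):
    """m(v, c): cosine similarity between an image embedding and a constituent embedding."""
    return F.cosine_similarity(v, c, dim=-1)

def hinge_term(v, pos_c, neg_c, delta=0.2):
    """[m(neg, v) - m(pos, v) + delta]_+ for one image, a matched and a mismatched constituent."""
    return F.relu(matching_score(v, neg_c) - matching_score(v, pos_c) + delta)
```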
As for the score function in constituency parsing, we use a hidden dimension of 128 and ReLU activation. All VG-NSL models are trained for 30 epochs. We use an Adam optimizer (Kingma and Ba, 2015) with initial learning rate 5 × 10−4 to train VG-NSL. The learning 1848 Model NP VP PP ADJP Avg. F1 Self F1 Random 47.3±0.3 10.5±0.4 17.3±0.7 33.5±0.8 27.1±0.2 32.4 Left 51.4 1.8 0.2 16.0 23.3 N/A Right 32.2 23.4 18.7 14.4 22.9 N/A PMI 54.2 16.0 14.3 39.2 30.5 N/A PRPN (Shen et al., 2018a) 72.8±9.7 33.0±9.1 61.6±9.9 35.4±4.3 52.5±2.6 60.3 ON-LSTM (Shen et al., 2019) 74.4±7.1 11.8±5.6 41.3±16.4 44.0±14.0 45.5±3.3 69.3 Gumbel (Choi et al., 2018)† 50.4±0.3 8.7±0.3 15.5±0.0 34.8±1.6 27.9±0.2 40.1 VG-NSL (ours)† 79.6±0.4 26.2±0.4 42.0±0.6 22.0±0.4 50.4±0.3 87.1 VG-NSL+HI (ours)† 74.6±0.5 32.5±1.5 66.5±1.2 21.7±1.1 53.3±0.2 90.2 VG-NSL+HI+FastText (ours)*† 78.8±0.5 24.4±0.9 65.6±1.1 22.0±0.7 54.4±0.4 89.8 Concreteness estimation–based models Turney et al. (2011)* 65.5 30.8 35.3 30.4 42.5 N/A Turney et al. (2011)+HI* 74.5 26.2 47.6 25.6 48.9 N/A Brysbaert et al. (2014)* 54.1 27.8 27.0 33.1 34.1 N/A Brysbaert et al. (2014)+HI* 73.4 23.9 50.0 26.1 47.9 N/A Hessel et al. (2018)† 50.9 21.7 32.8 27.5 33.2 N/A Hessel et al. (2018)+HI† 72.5 34.4 65.8 26.2 52.9 N/A Table 1: Recall of specific typed phrases, and overall F1 score, evaluated on the MSCOCO test split, averaged over 5 runs with different random initializations. We also include self-agreement F1 score (Williams et al., 2018) across the 5 runs. ± denotes standard deviation. * denotes models requiring extra labels and/or corpus, and † denotes models requiring a pre-trained visual feature extractor. We highlight the best number in each column among all models that do not require extra data other than paired image-caption data, as well as the overall best number. The Left, Right, PMI, and concreteness estimation–based models have no standard deviation or self F1 (shown as N/A) as they are deterministic given the training and/or testing data. rate is re-initialized to 2.5 × 10−4 after 15 epochs. We tune other hyperparameters of VG-NSL on the development set using the self-agreement F1 score (Williams et al., 2018) over 5 runs with different choices of random initialization. 4.4 Results: Unsupervised Constituency Parsing We evaluate the induced constituency parse trees via the overall F1 score, as well as the recall of four types of constituents: noun phrases (NP), verb phrases (VP), prepositional phrases (PP), and adjective phrases (ADJP) (Table 1). We also evaluate the robustness of models trained with fixed data and hyperparameters, but different random initialization, in two ways: via the standard deviation of performance across multiple runs, and via the selfagreement F1 score (Williams et al., 2018), which is the average F1 taken over pairs of different runs. Among all of the models which do not require extra labels, VG-NSL with the head-initial inductive bias (VG-NSL+HI) achieves the best F1 score. PRPN (Shen et al., 2018a) and a concreteness estimation-based baseline (Hessel et al., 2018) both produce competitive results. It is worth noting that the PRPN baseline reaches this performance without any information from images. However, the performance of PRPN is less stable than that of VG-NSL across random initializations. In contrast to its state-of-the-art performance on the WSJ full set (Shen et al., 2019), we observe that ON-LSTM does not perform well on the MSCOCO caption data set. 
However, it remains the best model for adjective phrases, which is consistent with the result reported by Shen et al. (2019). In addition to the best overall F1 scores, VGNSL+HI achieves competitive scores across most phrase types (NP, VP and PP). Our models (VGNSL and VG-NSL+HI) perform the best on NP and PP, which are the most common visually grounded phrases in the MSCOCO data set. In addition, our models produce much higher self F1 than the baselines (Shen et al., 2018a, 2019; Choi et al., 2018), showing that they reliably produce reasonable constituency parse trees with different initialization. We also test the effectiveness of using pretrained word embeddings. Specifically, for VGNSL+HI+FastText, we use a pre-trained FastText 1849 0 20 40 60 80 100 Percentage (%) 30 35 40 45 50 F1 with gold trees PRPN ON-LSTM VG-NSL VG-NSL+HI (a) The percent data-F1 curves. 0 20 40 60 80 100 Percentage (%) 40 50 60 70 80 90 self F1 PRPN ON-LSTM VG-NSL VG-NSL+HI (b) The percent data-self F1 curves. Figure 4: F1 score and self F1 score with respect to the amount of training data. All numbers are averaged over 5 runs with different random initialization. embedding (300-D, Joulin et al., 2016), concatenated with a 212-D trainable embedding, as the word embedding. Using pre-trained word embeddings further improves performance to an average F1 of 54.4% while keeping a comparable self F1. 4.5 Results: Data Efficiency We compare the data efficiency for PRPN (the strongest baseline method), ON-LSTM, VG-NSL, and VG-NSL+HI. We train the models using 1%, 2%, 5%, 10%, 20%, 50% and 100% of the MSCOCO training set, and report the overall F1 and self F1 scores on the test set (Figure 4). Compared to PRPN trained on the full training set, VG-NSL and VG-NSL+HI reach comparable performance using only 20% of the data (i.e., 8K images with 40K captions). VG-NSL tends to quickly become more stable (in terms of the self F1 score) as the amount of data increases, while PRPN and ON-LSTM remain less stable. 4.6 Analysis: Consistency with Linguistic Concreteness During training, VG-NSL acquires concreteness estimates for constituents via Equation 1. Here, we evaluate the consistency between word-level concreteness estimates induced by VG-NSL and those produced by other methods (Turney et al., 2011; Brysbaert et al., 2014; Hessel et al., 2018). Specifically, we measure the correlation between the conModel/method VG-NSL (+HI) Turney et al. (2011) 0.74 0.72 Brysbaert et al. (2014) 0.71 0.71 Hessel et al. (2018) 0.84 0.85 Table 2: Agreement between our concreteness estimates and existing models or labels, evaluated via the Pearson correlation coefficient computed over the most frequent 100 words in the MSCOCO test set, averaged over 5 runs with different random initialization. Model Criterion Avg. F1 Self F1 VG-NSL Self F1 50.4 ±0.3 87.1 VG-NSL R@1 47.7 ±0.6 83.4 VG-NSL+HI Self F1 53.3 ±0.2 90.2 VG-NSL+HI R@1 53.1 ±0.2 88.7 Table 3: Average F1 scores and Self F1 scores of VGNSL and VG-NSL+HI with different model selection methods. R@1 denotes using recall at 1 (Kiros et al., 2014) as the model selection criterion. All hyperparameters are tuned with respect to self-agreement F1 score. The numbers are comparable to those in Table 1. creteness estimated by VG-NSL on MSCOCO test set and existing linguistic concreteness definitions (Table 2). For any word, of which the representation is z, we estimate its concreteness by taking the average of concrete(z, v(i)), across all associated images v(i). 
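The word-level estimate described above can be sketched as follows; this is an illustrative snippet under our own assumptions (the names are invented, the margin value 0.2 stands in for $\delta'$, and the image vectors are assumed to have already been projected into the joint space), not the authors' evaluation code.

```python
# Illustrative sketch of Equation 1 and the word-level concreteness estimate.
import torch
import torch.nn.functional as F

def concreteness(z, v_i, neg_images, neg_constituents, delta_p=0.2):
    """concrete(z, v_i): margin violations of (z, v_i) against mismatched pairs (Eq. 1)."""
    pos = F.cosine_similarity(z, v_i, dim=-1)                                  # m(z, v_i)
    neg_c = F.cosine_similarity(neg_constituents, v_i.unsqueeze(0), dim=-1)    # m(c_p^(k), v_i)
    neg_v = F.cosine_similarity(z.unsqueeze(0), neg_images, dim=-1)            # m(z, v^(k))
    return F.relu(pos - neg_c - delta_p).sum() + F.relu(pos - neg_v - delta_p).sum()

def word_concreteness(word_vec, paired_images, neg_images, neg_constituents):
    """Average concrete(z, v) over all images whose captions contain the word."""
    scores = [concreteness(word_vec, v, neg_images, neg_constituents) for v in paired_images]
    return torch.stack(scores).mean()
```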
The high correlation between VG-NSL and the concreteness scores produced by Turney et al. (2011) and Brysbaert et al. (2014) supports the argument that the linguistic concept of concreteness can be acquired in an unsupervised way. Our model also achieves a high correlation with Hessel et al. (2018), which also estimates word concreteness based on visual-domain information. 4.7 Analysis: Self-Agreement F1 Score as the Criterion for Model Selection We introduce a novel hyperparameter tuning and model selection method based on the selfagreement F1 score. Let M(i,j) H denote the j-th checkpoint of the ith model trained with hyperparameters H, where M(i1,·) H and M(i2,·) H differ in their random initialization. The hyperparameters H are tuned to maximize: X 1≤i<k≤N max |ji−jk|<δ F1  M(i,ji) H , M(k,jk) H  , where F1(·, ·) denotes the F1 score between the trees generated by two models, N the number of 1850 Model EN DE FR PRPN 30.8 ±17.9 31.5 ±8.9 27.5 ±7.0 ON-LSTM 38.7 ±12.7 34.9 ±12.3 27.7 ±5.6 VG-NSL 33.5 ±0.2 36.3 ±0.2 34.3 ±0.6 VG-NSL+HI 38.7 ±0.2 38.3 ±0.2 38.1 ±0.6 Table 4: F1 scores on the Multi30K test split (Young et al., 2014; Elliott et al., 2016, 2017), averaged over 5 runs with different random initialization. ± denotes the standard deviation. different runs, and δ the margin to ensure only nearby checkpoints are compared.6 After finding the best hyperparameters H0, we train the model for another N times with different random initialization, and select the best models by arg max {jℓ}N ℓ=1 X 1≤i<k≤N F1  M(i,ji) H0 , M(k,jk) H0  . We compare the performance of VG-NSL selected by the self F1 score and that selected by recall at 1 in image-to-text retrieval (R@1 in Table 3; Kiros et al., 2014). As a model selection criterion, self F1 consistently outperforms R@1 (avg. F1: 50.4 vs. 47.7 and 53.3 vs. 53.1 for VG-NSL and VG-NSL+HI, respectively). Meanwhile, it is worth noting that even if we select VG-NSL by R@1, it shows better stability compared with PRPN and ON-LSTM (Table 1), in terms of the score variance across different random initialization and self F1. Specifically, the variance of avg. F1 is always less than 0.6 while the self F1 is greater than 80. Note that the PRPN and ON-LSTM models are not tuned using self F1, since these models are usually trained for hundreds or thousands of epochs and thus it is computationally expensive to evaluate self F1. We leave the efficient tuning of these baselines by self F1 as a future work. 4.8 Extension to Multiple Languages We extend our experiments to the Multi30K data set, which is built on the Flickr30K data set (Young et al., 2014) and consists of English, German (Elliott et al., 2016), and French (Elliott et al., 2017) captions. For Multi30K, there are 29,000 images in the training set, 1,014 in the development set and 1,000 in the test set. Each image is associated with one caption in each language. We compare our models to PRPN and ONLSTM in terms of overall F1 score (Table 4). VGNSL with the head-initial inductive bias consis6 In all of our experiments, N = 5, δ = 2. tently performs the best across the three languages, all of which are highly head-initial (Baker, 2001). Note that the F1 scores here are not comparable to those in Table 1, since Multi30K (English) has 13x fewer captions than MSCOCO. 5 Discussion We have proposed a simple but effective model, the Visually Grounded Neural Syntax Learner, for visually grounded language structure acquisition. VG-NSL jointly learns parse trees and visually grounded textual representations. 
In our experiments, we find that this approach to grounded language learning produces parsing models that are both accurate and stable, and that the learning is much more data-efficient than a state-of-the-art text-only approach. Along the way, the model acquires estimates of word concreteness. The results suggest multiple future research directions. First, VG-NSL matches text embeddings directly with embeddings of entire images. Its performance may be boosted by considering structured representations of both images (e.g., Lu et al., 2016; Wu et al., 2019) and texts (Steedman, 2000). Second, thus far we have used a shared representation for both syntax and semantics, but it may be useful to disentangle their representations (Steedman, 2000). Third, our best model is based on the head-initial inductive bias. Automatically acquiring such inductive biases from data remains challenging (Kemp et al., 2006; Gauthier et al., 2018). Finally, it may be possible to extend our approach to other linguistic tasks such as dependency parsing (Christie et al., 2016b), coreference resolution (Kottur et al., 2018), and learning pragmatics beyond semantics (Andreas and Klein, 2016). There are also limitations to the idea of grounded language acquisition. In particular, the current approach has thus far been applied to understanding grounded texts in a single domain (static visual scenes for VG-NSL). Its applicability could be extended by learning shared representations across multiple modalities (Castrejon et al., 2016) or integrating with pure text-domain models (such as PRPN, Shen et al., 2018a). Acknowledgement We thank Allyson Ettinger for helpful suggestions on this work, and the anonymous reviewers for their valuable feedback. 1851 References Jacob Andreas and Dan Klein. 2016. Reasoning about pragmatics with neural listeners and speakers. In Proc. of EMNLP. Mark C. Baker. 2001. The Atoms of Language: The Mind’s Hidden Rules of Grammar. Basic books. Sai Abishek Bhaskar, Maximilian K¨oper, Sabine Schulte Im Walde, and Diego Frassinelli. 2017. Exploring multi-modal text+image models to distinguish between abstract and concrete nouns. In Proc. of the IWCS workshop on Foundations of Situated and Multimodal Communication. Ann Bies, Mark Ferguson, Karen Katz, Robert MacIntyre, Victoria Tredinnick, Grace Kim, Mary Ann Marcinkiewicz, and Britta Schasberger. 1995. Bracketing guidelines for treebank II style Penn treebank project. University of Pennsylvania, 97. Rens Bod. 2006a. An all-subtrees approach to unsupervised parsing. In Proc. of COLING-ACL. Rens Bod. 2006b. Unsupervised parsing with U-DOP. In Proc. of CoNLL. Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2014. Concreteness ratings for 40 thousand generally known English word lemmas. Behav. Res. Methods, 46(3):904–911. Lluis Castrejon, Yusuf Aytar, Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. 2016. Learning aligned cross-modal representations from weakly aligned data. In Proc. of CVPR. Jihun Choi, Kang Min Yoo, and Sang-goo Lee. 2018. Learning to compose task-specific tree structures. In Proc. of AAAI. Gordon Christie, Ankit Laddha, Aishwarya Agrawal, Stanislaw Antol, Yash Goyal, Kevin Kochersberger, and Dhruv Batra. 2016a. Resolving language and vision ambiguities together: Joint segmentation & prepositional attachment resolution in captioned scenes. In Proc. of EMNLP. Gordon Christie, Ankit Laddha, Aishwarya Agrawal, Stanislaw Antol, Yash Goyal, Kevin Kochersberger, and Dhruv Batra. 2016b. 
Resolving language and vision ambiguities together: Joint segmentation & prepositional attachment resolution in captioned scenes. In Proc. of EMNLP. Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Comput. Linguist., 16(1):22–29. Shay Cohen and Noah A. Smith. 2009. Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction. In Proc. of NAACLHLT. Shay B. Cohen, Dipanjan Das, and Noah A. Smith. 2011. Unsupervised structure prediction with nonparallel multilingual guidance. In Proc. of EMNLP. Max Coltheart. 1981. The MRC psycholinguistic database. Q. J. Exp. Psychol., 33(4):497–505. Andrew Drozdov, Pat Verga, Mohit Yadav, Mohit Iyyer, and Andrew McCallum. 2019. Unsupervised latent tree induction with deep inside-outside recursive autoencoders. In Proc. of NAACL-HLT. Desmond Elliott, Stella Frank, Lo¨ıc Barrault, Fethi Bougares, and Lucia Specia. 2017. Findings of the second shared task on multimodal machine translation and multilingual image description. In Proc. of WMT. Desmond Elliott, Stella Frank, Khalil Sima’an, and Lucia Specia. 2016. Multi30K: Multilingual EnglishGerman image descriptions. In Proc. of the 5th Workshop on Vision and Language. Fartash Faghri, David J. Fleet, Jamie Ryan Kiros, and Sanja Fidler. 2018. VSE++: Improving visualsemantic embeddings with hard negatives. In Proc. of BMVC. Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency grammar induction via bitext projection constraints. In Proc. of ACLIJCNLP. Jon Gauthier, Roger Levy, and Joshua B. Tenenbaum. 2018. Word learning and the acquisition of syntactic–semantic overhypotheses. In Proc. of CogSci. Yoav Goldberg and Michael Elhadad. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In Proc. of NAACL-HLT. Wenjuan Han, Yong Jiang, and Kewei Tu. 2017. Dependency grammar induction with neural lexicalization and big training data. In Proc. of EMNLP. Serhii Havrylov, Germ´an Kruszewski, and Armand Joulin. 2019. Cooperative learning of disjoint syntax and semantics. In Proc. of NAACL-HLT. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proc. of CVPR. Jack Hessel, David Mimno, and Lillian Lee. 2018. Quantifying the visual concreteness of words and topics in multimodal datasets. In Proc. of NAACLHLT. Felix Hill, Douwe Kiela, and Anna Korhonen. 2013. Concreteness and corpora: A theoretical and practical study. In Proc. of CMCL. Felix Hill and Anna Korhonen. 2014a. Concreteness and subjectivity as dimensions of lexical meaning. In Proc. of ACL. 1852 Felix Hill and Anna Korhonen. 2014b. Learning abstract concept embeddings from multi-modal data: Since you probably cant see what i mean. In Proc. of EMNLP. Felix Hill, Roi Reichart, and Anna Korhonen. 2014. Multi-modal models for concrete and abstract concept meaning. TACL, 2(1):285–296. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with Gumbel-softmax. In Proc. of ICLR. Fred Jelinek, Robert L Mercer, Lalit R Bahl, and James K Baker. 1977. Perplexity—a measure of the difficulty of speech recognition tasks. The Journal of the Acoustical Society of America, 62(S1):S63– S63. Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, H´erve J´egou, and Tomas Mikolov. 2016. FastText.zip: Compressing text classification models. 
arXiv preprint arXiv:1612.03651. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proc. of CVPR. Charles K. Kemp, Amy Perfors, and Joshua B. Tenenbaum. 2006. Learning overhypotheses. In Proc. of CogSci. Douwe Kiela, Felix Hill, Anna Korhonen, and Stephen Clark. 2014. Improving multi-modal representations using image dispersion: Why less is sometimes more. In Proc. of ACL. Yoon Kim, Alexander M Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and G´abor Melis. 2019. Unsupervised recurrent neural network grammars. In Proc. of NAACL-HLT. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proc. of ICLR. Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel. 2014. Unifying visual-semantic embeddings with multimodal neural language models. arXiv:1411.2539. Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proc. of ACL. Dan Klein and Christopher D. Manning. 2002. A generative constituent-context model for improved grammar induction. In Proc. of ACL. Dan Klein and Christopher D. Manning. 2004. Corpusbased induction of syntactic structure: Models of dependency and constituency. In Proc. of ACL. Dan Klein and Christopher D. Manning. 2005. Natural language grammar induction with a generative constituent-context model. Pattern Recognition, 38(9):1407–1419. Satwik Kottur, Jos´e MF Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. 2018. Visual coreference resolution in visual dialog using neural module networks. In Proc. of ECCV. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C. Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In Proc. of ECCV. Cewu Lu, Ranjay Krishna, Michael Bernstein, and Li Fei-Fei. 2016. Visual relationship detection with language priors. In Proc. of ECCV. Lin Ma, Zhengdong Lu, Lifeng Shang, and Hang Li. 2015. Multimodal convolutional neural networks for matching image and sentence. In Proc. of CVPR. Jean Maillard and Stephen Clark. 2018. Latent tree learning with differentiable parsers: Shift-reduce rarsing and chart parsing. In Proc. of the Workshop on the Relevance of Linguistic Structure in Neural Architectures for NLP. Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. 2015. Ask your neurons: A neural-based approach to answering questions about images. In Proc. of ICCV. Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B Tenenbaum, and Jiajun Wu. 2019. The neurosymbolic concept learner: Interpreting scenes, words, and sentences from natural supervision. In Proc. of ICLR. Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn treebank. Comput. Linguist., 19(2). Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y. Ng. 2011. Multimodal deep learning. In Proc. of ICML. Phu Mon Htut, Kyunghyun Cho, and Samuel R. Bowman. 2018. Grammar induction with neural language models: An unsual replication. In Proc. of EMNLP. Steven Pinker. 1984. Language Learnability and Language Development. Cambridge University Press. Elias Ponvert, Jason Baldridge, and Katrin Erk. 2011. Simple unsupervised grammar induction from raw rext with cascaded finite state models. In Proc. of ACL. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. 
ImageNet large scale visual recognition challenge. IJCV, 115(3):211–252. 1853 Satoshi Sekine and Michael Collins. 1997. Evalb bracket scoring program. Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron Courville. 2018a. Neural language modeling by jointly learning syntax and lexicon. In Proc. of ICLR. Yikang Shen, Zhouhan Lin, Athul Paul Jacob, Alessandro Sordoni, Aaron Courville, and Yoshua Bengio. 2018b. Straight to the tree: Constituency parsing with neural syntactic distance. In Proc. of ACL. Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2019. Ordered neurons: Integrating tree structures into recurrent neural networks. In Proc. of ICLR. Haoyue Shi, Jiayuan Mao, Tete Xiao, Yuning Jiang, and Jian Sun. 2018a. Learning visually-grounded semantics from contrastive adversarial samples. In Proc. of COLING. Haoyue Shi, Hao Zhou, Jiaze Chen, and Lei Li. 2018b. On tree-based neural sentence modeling. In Proc. of EMNLP. Jiaxin Shi, Lei Hou, Juanzi Li, Zhiyuan Liu, and Hanwang Zhang. 2019. Learning to embed sentences using attentive recursive trees. In Proc. of AAAI. N. Siddharth, Andrei Barbu, and Jeffrey Mark Siskind. 2014. Seeing what you’re told: Sentence-guided activity recognition in video. In Proc. of CVPR. Carina Silberer, Vittorio Ferrari, and Mirella Lapata. 2017. Visually grounded meaning representations. TPAMI, 39(11):2284–2297. Carina Silberer and Mirella Lapata. 2012. Grounded models of semantic representation. In Proc. of EMNLP-CoNLL. Noah A. Smith and Jason Eisner. 2006. Annealing structural bias in multilingual weighted grammar induction. In Proc. of COLING-ACL. Benjamin Snyder, Tahira Naseem, and Regina Barzilay. 2009. Unsupervised multilingual grammar induction. In Proc. of ACL-IJCNLP. Valentin I. Spitkovsky, Hiyan Alshawi, Angel X. Chang, and Daniel Jurafsky. 2011. Unsupervised dependency parsing without gold part-of-speech tags. In Proc. of EMNLP. Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2010. From baby steps to leapfrog: How “less is more” in unsupervised dependency parsing. In Proc. of NAACL-HLT. Mark Steedman. 2000. The Syntactic Process. MIT press Cambridge, MA. Peter D. Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense identification through concrete and abstract context. In Proc. of EMNLP. Adina Williams, Andrew Drozdov, and Samuel R. Bowman. 2018. Do latent tree learning models identify meaningful structure in sentences? TACL, 6:253–267. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Mach. Learn., 8(3-4):229–256. Hao Wu, Jiayuan Mao, Yufeng Zhang, Weiwei Sun, Yuning Jiang, Lei Li, and Wei-Ying Ma. 2019. Unified Visual-Semantic Embeddings: Bridging Vision and Language with Structured Meaning Representations. In Proc. of CVPR. Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. 2017. Learning to compose words into sentences with reinforcement learning. In Proc. of ICLR. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. TACL, 2:67–78. Haonan Yu, N. Siddharth, Andrei Barbu, and Jeffrey Mark Siskind. 2015. A compositional framework for grounding language inference, generation, and acquisition in video. JAIR, 52:601–713. Supplementary Material The supplementary material is organized as follows. 
First, in Section A, we summarize and compare existing models for constituency parsing without explicit syntactic supervision. Next, in Section B, we present more implementation details of VG-NSL. Third, in Section C, we present the implementation details for all of our baseline models. Fourth, in Section D, we present the evaluation details of Benepar (Kitaev and Klein, 2018) on the MSCOCO data set. Fifth, in Section E, we qualitatively and quantitatively compare the concreteness scores estimated or labeled by different methods. Finally, in Section F, we show sample trees generated by VG-NSL on the MSCOCO test set. A Overview of Models for Constituency Parsing without Explicit Syntactic Supervision Shown in Table 5, we compare existing models for constituency parsing without explicit syntactic supervision, with respect to their learning objective, dependence on extra labels or extra corpus, and other features. The table also includes the analysis of previous works on parsing sentences based on gold part-of-speech tags. 1854 Model Objective Extra Label MultiStochastic Extra modal Corpus CCM (Klein and Manning, 2002)* MAP POS    DMV-CCM (Klein and Manning, 2005)* MAP POS    U-DOP (Bod, 2006b)* Probability Estimation POS    UML-DOP (Bod, 2006a)* MAP POS    PMI N/A     Random N/A     Left N/A     Right N/A     PRPN (Shen et al., 2018a) LM     ON-LSTM (Shen et al., 2019) LM     Gumbel softmax(Choi et al., 2018) Cross-modal Retrieval     VG-NSL (ours) Cross-modal Retrieval     VG-NSL+HI (ours) Cross-modal Retrieval     Concreteness estimation based models Turney et al. (2011)* N/A Concreteness (Partial)    Turney et al. (2011)+HI* N/A Concreteness (Partial)    Brysbaert et al. (2014)* N/A Concreteness (Full)    Brysbaert et al. (2014)+HI* N/A Concreteness (Full)    Hessel et al. (2018) N/A     Hessel et al. (2018)+HI N/A     Table 5: Comparison of models for constituency parsing without explicit syntactic supervision. * denotes models requiring extra labels, such as POS tags or manually labeled concreteness scores. All multimodal methods listed in the table require a pretrained visual feature extractor (i.e., ResNet-101; He et al., 2016). A model is labeled as stochastic if for fixed training data and hyperparameters the model may produce different results (e.g., due to different choices of random initialization). To the best of our knowledge, results on concreteness estimation (Turney et al., 2011; Brysbaert et al., 2014; Hessel et al., 2018) have not been applied to unsupervised parsing so far. aaa ! ! ! bb b " " " QQ   ZZ   @ @ A cat is on the ground (a) Left-branching tree. HH H    A bb b " " " cat bb b " " " is bb b " " " on ZZ   the ground (b) Right-branching tree. Figure 5: Examples of some trivial tree structures. B Implementation Details for VG-NSL We adopt the code released by Faghri et al. (2018)7 as the visual-semantic embedding module for VGNSL. Following them, we fix the margin δ to 0.2. We also use the vocabulary provided by Faghri et al. 7https://github.com/fartashf/vsepp (2018),8 which contains 10,000 frequent words in the MSCOCO data set. Out-of-vocabulary words are treated as unseen words. For either VG-NSL or baselines, we use the same vocabulary if applicable. 8http://www.cs.toronto.edu/˜faghri/ vsepp/vocab.tar 1855 Hyperparameter tuning. As stated in main text, we use the self-agreement F1 score (Williams et al., 2018) as an unsupervised signal for tuning all hyperparamters. 
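For concreteness, the kind of self-agreement signal used here could be computed as sketched below. This is our own illustrative sketch, not the authors' implementation: it assumes parses are nested tuples of tokens, scores unlabeled bracket F1 (including the full-sentence bracket), and averages over all pairs of differently seeded training runs.

    from itertools import combinations

    def spans(tree, start=0):
        # Return (set of (start, end) brackets, subtree length) for a nested-tuple tree.
        if not isinstance(tree, (list, tuple)):   # leaf token
            return set(), 1
        brackets, length = set(), 0
        for child in tree:
            child_brackets, child_len = spans(child, start + length)
            brackets |= child_brackets
            length += child_len
        brackets.add((start, start + length))     # bracket covering this node
        return brackets, length

    def bracket_f1(tree_a, tree_b):
        a, _ = spans(tree_a)
        b, _ = spans(tree_b)
        if not a or not b:
            return 0.0
        overlap = len(a & b)
        if overlap == 0:
            return 0.0
        precision, recall = overlap / len(a), overlap / len(b)
        return 2 * precision * recall / (precision + recall)

    def self_agreement_f1(parses_per_run):
        # parses_per_run: one list of trees per random seed, all over the same captions.
        scores = []
        for run_a, run_b in combinations(parses_per_run, 2):
            scores.extend(bracket_f1(ta, tb) for ta, tb in zip(run_a, run_b))
        return sum(scores) / len(scores)

Under this sketch, a hyperparameter setting is preferred if it yields a higher average pairwise F1 across seeds, without consulting any gold trees.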
Besides the learning rate and other conventional hyperparameters, we also tune λ, the hyperparameter for the head-initial bias model. λ indicates the weight of penalization for “right abstract constituents”. We choose λ from {1, 2, 5, 10, 20, 50, 100} and found that λ = 20 gives the best self-agreement F1 score. C Implementation Details for Baselines Trivial tree structures. We show examples for left-branching binary trees and right-branching binary trees in Figure 5. As for binary random trees, we iteratively combine two randomly selected adjacent constituents. This procedure is similar to that shown in Algorithm 2. Parsing-Reading-Predict Network (PRPN). We use the code released by Shen et al. (2018a) to train PRPN.9 We tune the hyperparameters with respect to language modeling perplexity (Jelinek et al., 1977). For a fair comparison, we fix the hidden dimension of all hidden layers of PRPN as 512. We use an Adam optimizer (Kingma and Ba, 2015) to optimize the parameters. The tuned parameters are number of layers (1, 2, 3) and learning rate (1 × 10−3, 5 × 10−4, 2 × 10−4). The models are trained for 100 epochs on the MSCOCO dataset and 1,000 epochs on the Multi30K dataset, and are early stopped using the criterion of language model perplexity. Ordered Neurons (ON-LSTM). We use the code release by Shen et al. (2019) to train ONLSTM.10 We tune the hyperparameters with respect to language modeling perplexity (Jelinek et al., 1977), and use perplexity as an early stopping criterion. For a fair comparison, the hidden dimension of all hidden layers is set to 512, and the chunk size is changed to 16 to fit the hidden layer size. Following the original paper (Shen et al., 2019), we set the number of layers to be 3, and report the constituency parse tree with respect to the gate values output by the second layer of ON-LSTM. In order to obtain a better perplexity, we explore both Adam (Kingma and Ba, 2015) and SGD as the optimizer. We tune the learning rate (1 × 10−3, 9https://github.com/yikangshen/PRPN 10https://github.com/yikangshen/ Ordered-Neurons Algorithm 1: Constituency parsing based on given syntactic distance. Input: text length m, list of syntactic distances d = (d1, d2, . . . , dm−1) Output: Boundaries of constituents B = {(Li, Ri)}i=1,...,m−1 B = parse(d, 1, m) Function parse(d, left, right) if left = right then return EmptySet end p = arg maxj∈[left,right-1] dj boundaries = union( {(left, right)}, parse (d, left, left + p), parse (d, left+p + 1, right) ) return boundaries 5 × 10−4, 2 × 10−4 for Adam, and 0.1, 1, 10, 30 for SGD). The models are trained for 100 epochs on the MSCOCO dataset and 1,000 epochs on the Multi30K dataset, and are early stopped using the criterion of language model perplexity. PMI based constituency parsing. We estimate the pointwise mutual information (PMI; Church and Hanks, 1990) between two words using all captions in MSCOCO training set. We apply negative PMI as syntactic distance (Shen et al., 2018b) to generate a binary constituency parse tree recursively. The method of constituency parsing with a given list of syntactic distances is shown in Algorithm 1. Gumbel-softmax based latent tree. We integrate Gumbel-softmax latent tree based text encoder (Choi et al., 2018)11 to the visual semantic embedding framework (Faghri et al., 2018), and use the tree structure produced by it as a baseline. Concreteness estimation. For the semisupervised concreteness estimation, we reproduce the experiments by Turney et al. 
(2011), applying the manually labeled concreteness scores for 4,295 words from the MRC Psycholinguistic Database Machine Usable Dictionary (Coltheart, 1981) as supervision,12 and use English Wikipedia pages 11https://github.com/jihunchoi/ unsupervised-treelstm 12http://ota.oucs.ox.ac.uk/headers/1054. xml 1856 Turney et al. (2011) Brysbaert et al. (2014) Hessel et al. (2018) VG-NSL+HI Turney et al. (2011) 1.00 0.84 0.58 0.72 Brysbaert et al. (2014) 0.84 1.00 0.55 0.71 Hessel et al. (2018) 0.58 0.55 1.00 0.85 VG-NSL+HI 0.72 0.71 0.85 1.00 Table 6: Pearson correlation coefficients between existing concreteness estimation methods, including baselines and VG-NSL+HI. In order to make a fair comparison, the correlation coefficients are evaluated on the 100 most frequent words on MSCOCO test set. -1.5 -1 -0.5 0 0.5 1 Turney et al., 2011 Brysbaert et al., 2014 Hessel et al., 2018 VG-NSL+HI (ours) cat on ground while young wood who wet Figure 6: Normalized concreteness scores of example words. to estimate PMI between words.13 The PMI is then used to compute similarity between seen and unseen words, which is further used as weights to estimate concreteness for unseen words. For the concreteness scores from crowdsourcing, we use the released data set of Brysbaert et al. (2014).14 Similarly to VG-NSL, the multimodal concreteness score (Hessel et al., 2018) is also estimated on the MSCOCO training set, using an open-sourced implementation.15 Constituency parsing with concreteness scores. Denote α(w) as the concreteness score estimated by a model for the word w. Given a sequence of concreteness scores of caption tokens denoted by (α(w1), α(w2), . . . , α(wm)), we aim to produce a binary constituency parse tree. We first normalize the concreteness scores to the range of [−1, 1], via:16 α′(wi) = 2  α(wi) −maxj α(wj)−minj α(wj) 2  maxj α(wj) −minj α(wj) . We treat unseen words (i.e., out-of-vocabulary words) in the same way in VG-NSL, by assigning 13https://dumps.wikimedia.org/other/ static_html_dumps/April_2007/en/ 14http://crr.ugent.be/archives/1330 15https://github.com/victorssilva/ concreteness 16 For the concreteness scores estimated by Hessel et al. (2018), we let α(w) = log α(w) before normalizing, as the original scores are in the range of (0, +∞). the concreteness of −1 to unseen words, with the assumption that unseen words are the most abstract ones. We compose constituency parse trees using the normalized concreteness scores by iteratively combining consecutive constituents. At each step, we select two adjacent constituents (initially, words) with the highest average concreteness score and combine them into a larger constituent, of which the concreteness is the average of its children. We repeat the above procedure until there is only one constituent left. As for the head-initial inductive bias, we weight the concreteness of the right constituent with a hyperparemeter τ > 1 when ranking all pairs of consecutive constituents during selection. Meanwhile, the concreteness of the composed constituent remains the average of the two component constituents. In order to keep consistent with VG-NSL, we set τ = 20 in all of our experiments. The procedure is summarized in Algorithm 2. D Details of Manual Ground Truth Evaluation It is important to confirm that the constituency parse trees of the MSCOCO captions produced by Benepar (Kitaev and Klein, 2018) are of high enough qualities, so that they can serve as reliable ground truth for further evaluation of other models. 
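As a side note on the procedure just summarized in Algorithm 2 (Section C), a minimal sketch of concreteness-based parsing is given below. It is our own illustration under stated assumptions, not the authors' released code: raw scores are normalized to [-1, 1], unseen words receive -1, and adjacent constituents are merged greedily with the head-initial weight τ applied to the right constituent, as in the algorithm.

    def normalize(scores):
        # scores: raw concreteness per token; None marks out-of-vocabulary words.
        known = [s for s in scores if s is not None]
        mid = (max(known) + min(known)) / 2
        half = (max(known) - min(known)) / 2 or 1.0   # guard against a constant list
        # Map known scores to [-1, 1]; unseen words are treated as most abstract.
        return [-1.0 if s is None else (s - mid) / half for s in scores]

    def parse_by_concreteness(scores, tau=20.0):
        a = normalize(scores)
        left = list(range(len(a)))    # leftmost token index of each constituent
        right = list(range(len(a)))   # rightmost token index of each constituent
        boundaries = []
        while len(a) > 1:
            # Head-initial bias: the right constituent's score is weighted by tau.
            p = max(range(len(a) - 1), key=lambda j: a[j] + tau * a[j + 1])
            boundaries.append((left[p], right[p + 1]))
            # The merged constituent keeps the average concreteness of its children.
            a = a[:p] + [(a[p] + a[p + 1]) / 2] + a[p + 2:]
            left = left[:p] + [left[p]] + left[p + 2:]
            right = right[:p] + [right[p + 1]] + right[p + 2:]
        return boundaries

With τ = 20, the ranking is dominated by the concreteness of the right constituent, which realizes the head-initial preference described above.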
To verify the quality of the Benepar parses, we randomly sample 50 captions from the MSCOCO test split and manually label their constituency parse trees without reference to either Benepar or the paired images, following the principles of Bies et al. (1995) as much as possible.17 Note that we only label the tree structures, without constituency labels (e.g., NP and PP). Most failure cases by Benepar are related to human commonsense in resolving parsing ambiguities, e.g., prepositional phrase attachments (Figure 7). We compare the manually labeled trees with those produced by Benepar (Kitaev and Klein, 2018) and find that the F1 score between them is 95.65.

17 The manually labeled constituency parse trees are publicly available at https://ttic.uchicago.edu/~freda/vgnsl/manually_labeled_trees.txt

[Figure 7 (tree renderings omitted): (a) Constituency parse tree labeled by Benepar (Kitaev and Klein, 2018); (b) Manually labeled constituency parse tree.] Figure 7: A failure example by Benepar, where it fails to parse the noun phrase "three white sinks in a bathroom under mirrors" – according to human commonsense, it is much more common for sinks, rather than a bathroom, to be under mirrors. However, most of the constituents (e.g., "three white sinks" and "under mirrors") are still successfully extracted by Benepar.

Algorithm 2: Constituency parsing based on concreteness estimation.
Input: list of normalized concreteness scores a = (a_1, a_2, ..., a_m), hyperparameter τ
Output: boundaries of constituents B = {(L_i, R_i)}, i = 1, ..., m-1
  for j = 1 to m do
    left_j = j; right_j = j
  end
  while len(a) > 1 do
    p = argmax_j (a_j + τ a_{j+1})
    add (left_p, right_{p+1}) to B
    a = a_{<p} + ((a_p + a_{p+1}) / 2) + a_{>p+1}
    left = left_{<p} + (left_p) + left_{>p+1}
    right = right_{<p} + (right_{p+1}) + right_{>p+1}
  end

E Concreteness by Different Models

E.1 Correlation between Different Concreteness Estimations

We report the correlation between different methods for concreteness estimation in Table 6. The concreteness scores given by Turney et al. (2011) and Brysbaert et al. (2014) correlate highly with each other. The concreteness scores estimated on the multimodal dataset (Hessel et al., 2018) also correlate moderately with those two methods (Turney et al., 2011; Brysbaert et al., 2014). Compared to the concreteness estimated by Hessel et al. (2018), the scores estimated by our model correlate more strongly with the scores estimated from linguistic data (Turney et al., 2011; Brysbaert et al., 2014).

E.2 Concreteness Scores of Sample Words by Different Methods

We present the concreteness scores estimated or labeled by different methods in Figure 6, which qualitatively shows that the different methods correlate well with one another.

F Sample Trees Generated by VG-NSL

Figure 8 shows sample trees generated by VG-NSL with the head-initial inductive bias (VG-NSL+HI). All captions are chosen from the MSCOCO test set.

[Figure 8 (tree renderings omitted); the parsed captions are: (a) a kitchen with two windows and two metal sinks; (b) a blue small plane standing at the airstrip; (c) young boy sitting on top of a briefcase; (d) a small dog eating a plate of broccoli; (e) a building with a bunch of people standing around it; (f) a horse walking by a tree in the woods; (g) the golden waffle has a banana in it; (h) a bowl full of oranges that still have stems; (i) there is a person that is sitting in the boat on the water; (j) a sandwich and soup sit on a table; (k) a big umbrella sitting on the beach.] Figure 8: Examples of parse trees generated by VG-NSL.
2019
180
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1862–1872 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1862 Stay on the Path: Instruction Fidelity in Vision-and-Language Navigation Vihan Jain∗ Gabriel Magalhaes∗ Alexander Ku∗ Ashish Vaswani Eugene Ie Jason Baldridge Google Research {vihan, gamaga, alexku, avaswani, eugeneie, jridge}@google.com Abstract Advances in learning and representations have reinvigorated work that connects language to other modalities. A particularly exciting direction is Vision-and-Language Navigation (VLN), in which agents interpret natural language instructions and visual scenes to move through environments and reach goals. Despite recent progress, current research leaves unclear how much of a role language understanding plays in this task, especially because dominant evaluation metrics have focused on goal completion rather than the sequence of actions corresponding to the instructions. Here, we highlight shortcomings of current metrics for the Room-to-Room dataset (Anderson et al., 2018b) and propose a new metric, Coverage weighted by Length Score (CLS). We also show that the existing paths in the dataset are not ideal for evaluating instruction following because they are direct-to-goal shortest paths. We join existing short paths to form more challenging extended paths to create a new data set, Room-for-Room (R4R). Using R4R and CLS, we show that agents that receive rewards for instruction fidelity outperform agents that focus on goal completion. 1 Introduction In Vision-and-Language Navigation (VLN) tasks, agents must follow natural language navigation instructions through either simulated (Macmahon et al., 2006; Yan et al., 2018; Bisk et al., 2018; Shah et al., 2018), simulations of realistic (Blukis et al., 2018; Misra et al., 2018) and real environments (Anderson et al., 2018b; de Vries et al., 2018; Chen et al., 2019; Cirik et al., 2018), or actual physical environments (Skoˇcaj et al., 2016; Thomason et al., 2018; Williams et al., 2018). Compared to other tasks involving co-grounding in visual and ∗Authors contributed equally. Figure 1: It’s the journey, not just the goal. To give language its due place in VLN, we compose paths in the R2R dataset to create longer, twistier R4R paths (blue). Under standard metrics, agents that head straight to the goal (red) are not penalized for ignoring the language instructions: for instance, SPL yields a perfect 1.0 score for the red and only 0.17 for the orange path. In contrast, our proposed CLS metric measures fidelity to the reference path, strongly preferring the agent with the orange path (0.87) over the red one (0.23). language modalities – such as image and video captioning (Donahue et al., 2017; Fang et al., 2015; Vinyals et al., 2015; Wang et al., 2018; Yu et al., 2016), visual question answering (VQA) (Antol et al., 2015; Yang et al., 2016), and visual dialog (Das et al., 2017) – VLN additionally requires agents to plan their actions, move, and dynamically respond to changes in their visual field. Photo-realistic simulations for VLN are especially promising: they retain messy, real world complexity and can draw on pre-trained models and rich data about the world, but do not require investment in and maintenance of physical robots and spaces for them. Given this, we focus on the Roomto-Room (R2R) task (Anderson et al., 2018b). 
Despite significant recent progress on R2R since its introduction (Fried et al., 2018; Ma et al., 2019; Wang et al., 2019), the structure of the dataset and current evaluation metrics greatly diminish the im1863 portance of language understanding for the task. The core problems are that paths in R2R are all direct-to-goal shortest paths and metrics are mostly based on goal completion rather than fidelity to the described path. To address this, we define a new metric, Coverage weighted by Length Score (CLS), and compose path pairs of R2R to create Room-forRoom (R4R), an algorithmically produced extension of R2R. Figure 1 illustrates path composition and the scores of two agent paths for both CLS and Success weighted by Path Length (SPL), a metric recently proposed by Anderson et al. (2018a). In the example, an agent which ignores the language but gets to the goal receives a perfect SPL score. Language is not irrelevant for R2R. Thomason et al. (2019) ablate visual and language inputs and find that withholding either from an action sampling agent reduces performance on unseen houses. Also, the generated instructions in the augmented paths of Fried et al. (2018) improved performance for several models. However, while many of these augmented instructions have clear starting or ending descriptions, the middle portions are often disconnected from the path they are paired with (see Huang et al. (2019) for in depth analysis of augmented path instructions). That these low-fidelity augmented instructions improve results indicates that current metrics are insensitive to instruction fidelity. Our new CLS metric measures how closely an agent’s trajectory conforms with the entire reference path, not just goal completion. Because the reference paths in R2R are all directto-goal, the importance of the actual journey taken from start to finish is diminished; as a result, fidelity between instructions and their corresponding paths is harder to evaluate. In longer, twistier paths, the importance of not always going directly to the goal becomes much clearer. We take advantage of the fact that the original R2R data contains many paths that have goals that coincide with the start points of other paths. By concatenating pairs of paths and their corresponding instructions, we create longer paths that allow us to better gauge the ability of an agent to stick to the path as described. With this data, Reinforced Cross-modal Matching models (Wang et al., 2019) that use CLS as a reward signal dramatically improve not only CLS (from 20.4% for the agent with goal-oriented rewards to 34.6%), but navigation error also reduces from 8.45m to 8.08m on the the Validation Unseen dataset. Furthermore, we find that the agent with goal-oriented rewards obtains the same CLS (20.4) on R4R regardless of whether the full instruction or only the last five tokens are provided to it. In contrast, the CLS-rewarded agent drops from CLS of 34.6 to 25.3 when given only the last five tokens. 2 Extending R2R to create R4R Instructions such as “Turn left, walk up the stairs. Enter the bathroom.” are easy for people but challenging for computational agents. Agents must segment instructions, set sub-goals based on understanding them and ground the language and their actions in real world objects and dynamics. An agent may need expectations for how spatial scenes change when turning. 
Additionally, it must recognize visual and environmental features that indicate it has entered or encountered something referred to as “the bathroom” and know to stop. 2.1 Room-to-Room (R2R) Room-to-Room (R2R) supports visually-grounded natural language navigation in a photo-realistic environment (Anderson et al., 2018b). R2R consists of an environment and language instructions paired to reference paths. The environment defines a graph where nodes are possible positions an agent may inhabit. Edges indicate that a direct path between two nodes is navigable. For each node, R2R provides an egocentric panoramic view. All images are collected from buildings and house interiors. The paths paired with language instructions are composed by sequences of nodes in this graph. For data collection, starting and goal nodes are sampled from the graph and the shortest path between those nodes is taken, provided it is no shorter than 5m and contains between 4 and 6 edges. Each path has 3 associated natural language instructions, with an average length of 29 words and a total vocabulary of 3.1k words. Apart from the training set, the dataset includes two validation sets and a test set. One of the validation sets includes new instructions on environments overlapping with the training set (Validation Seen), and the other is entirely disjoint from the training set (Validation Unseen). Fried et al. (2018) propose a follower model which is trained using student forcing, where actions are sampled from the agent’s decisions, but supervised using the action that takes the agent closest to the goal. During inference, the follower generates candidate paths which are then scored by a speaker model. The speaker model was also used 1864 Figure 2: An example of an extended path in the R4R dataset, where the dotted blue arrow connects two blue paths with solid arrows, corresponding to the instructions “Make a left down at the narrow hall beside the office and walk straight to the exit door. Go out the door and wait.” and “Turn around and enter the bedroom. Walk to the other side of the room and turn left. Walk into the doorway leading out and stop.”. The shortestto-goal path from the starting point is shown in orange. for creating an augmented dataset that is used as an extension of training data by the follower model as well as by many subsequently published models. Wang et al. (2019) train their agents using policy gradients. At every step, the agent is rewarded for getting closer to the target location (extrinsic reward) as well as for choosing an action that reduces cycle-reconstruction error between instruction generated by a matching critic and ground-truth instruction (intrinsic reward). In both papers, there is little analysis presented about the generative models. Recently, Anderson et al. (2018a) pointed out weaknesses in the commonly used metrics for evaluating the effectiveness of agents trained on these tasks. A new metric, Success weighted by Path Length (SPL) was proposed that penalized agents for taking long paths. Any agent using beam search (e.g. Fried et al. (2018)), is penalized heavily by this metric. There have also been concerns about structural biases present in these datasets which may provide hidden shortcuts to agents training on these problems. Thomason et al. (2019) presented an analysis on R2R dataset, where the trained agent continued to perform surprisingly well in the absence of language inputs. 
2.2 Room-for-Room (R4R) Due to the process by which the data are generated, all R2R reference paths are shortest-to-goal paths. Because of this property, conformity to the instructions is decoupled from reaching the desired destination – and this short-changes the language perspective. In a broader scope of reference paths, the importance of following language in#samples PL(R) d(r1, r|R|) R2R Train 14039 9.91 9.91 Val. seen 1021 10.2 10.2 Val. unseen 2249 9.50 9.50 R4R Train 233613 20.6 10.5 Val. seen 1035 20.4 11.1 Val. unseen 45162 20.2 10.1 Table 1: Comparison of R2R to R4R. PL(R) represents the mean path length of the reference paths and d(r1, r|R|) is mean length of the shortest-to-goal path. structions in their entirety becomes clearer, and proper evaluation of this conformity can be better studied. Additionally, the fact that the largest path in the dataset has only 6 edges exacerbates the challenge of properly evaluating conformity. This motivates the need for a dataset with larger and more diverse reference paths. To address the lack of path variety, we propose a data augmentation strategy1 that introduces long, twisty paths without additional human or low-fidelity machine annotations (e.g. those from Fried et al. (2018)). Existing paths in the dataset can be extended by joining them with other paths that start within some threshold of where they end. Formally, two paths A=(a1, a2, · · · , a|A|) and B=(b1, b2, · · · , b|B|) are joined if d(a|A|, b1)<dth. The resulting extended paths are thus R=(a1, · · · , a|A|, c1, · · · , c|C|, b1, · · · , b|B|), where C = (c1, c2, · · · , c|C|) is the shortest path between a|A| and b1. (If a|A|=b1, C is empty.) Each combination of instructions corresponding to paths A and B is included in R4R. Since each path maps to multiple human-annotated instructions, each extended path will map to NA ·NB joined instructions, where NA and NB are the number of annotations associated with paths A and B, respectively. Figure 2 shows an example of an extended path and the corresponding instructions, compared to the shortest-to-goal path. 3 Evaluation Metrics in VLN Historically, the performance of VLN models has been evaluated with respect to the objective of reaching the goal location. The nature of the path an agent takes, however, is of clear practical importance: it is undesirable for any robotic agent in the physical world to reach the destination by taking a 1R2R-to-R4R code is at https://github.com/googleresearch/google-research/tree/master/r4r 1865 Figure 3: From left to right, the distribution of the number of steps, path lengths, direct-to-goal path lengths and instruction lengths in the original R2R and extended R4R datasets. different path than what it was instructed to follow; failure to comply with instructions might lead to navigating unwanted and potentially dangerous locations. Here, we propose a series of desiderata for VLN metrics and introduce Coverage weighted by Length Score (CLS). Table 2 provides a high level summary of this section’s contents. 3.1 Desiderata Commonly, navigation tasks are defined in a discrete space: the environment determines a graph where each node is a position the agent could be in and each edge between two nodes represents that there is a navigable step between them. Let the predicted path P = (p1, p2, p3, ..., p|P|) be the sequence of nodes visited by the agent and reference path R = (r1, r2, r3, ..., r|R|) be the sequence of nodes in the reference trajectory. 
Generally, p1 = r1, since in many VLN tasks, the agent begins at the reference path’s start node. The following desiderata characterize metrics that gauge the fidelity of P with respect to R rather than just goal completion. Throughout the paper, we refer to the subsequent desired properties as Desideratum (i). (1) Path similarity measure. Metrics should characterize a notion of similarity between a predicted path P and a reference path R. This implies that metrics should depend on all nodes in P and all nodes in R, which contrasts with many common metrics which only consider the last node in the reference path (see Section 3.2). Metrics should penalize deviations from the reference path, even if they lead to the same goal. This is not only prudent, as agents might wander around undesired terrain if this is not enforced, but also explicitly gauges the fidelity of the predictions with respect to the provided language instructions. (2) Soft penalties. Metrics should penalize differences from the reference path according to a soft notion of dissimilarity that depends on distances in the graph. This ensures that larger discrepancies are penalized more severely than smaller ones and that metrics should not rely only on dichotomous views of intersection. For instance, a predicted path that has no intersection to the reference path, but follows it closely, as illustrated in Figure 1 should not be penalized too severely. (3) Unique optimum. Metrics should yield a perfect score if and only if the reference and predicted paths are an exact match. This ensures that the perfect score is unambiguous: the reference path R is therefore treated as a golden standard. No other path should have the same or higher score as the reference path itself. (4) Scale invariance. Metrics should be consistent over different datasets. (5) Computational tractability. Metrics should be pragmatic, allowing fast automated evaluation of performance in navigation tasks. 3.2 Existing Navigation Metrics Table 2 defines previous navigation metrics and how they match our desiderata. We denote by d(n, m) the shortest distance between two nodes along the edges of the graph and d(n, P) = minp∈P d(n, p) the shortest distance between a node and a path. All distances are computed along the edges of the graph determined by the environment, which are not necessarily equal to the euclidean distance between the nodes. Path Length (PL) measures the total length of the predicted path, which has the optimal value equal to the length of the reference path. Navigation Error (NE) measures the distance between the last node in the predicted path and the last reference path node. Oracle Navigation Error (ONE) measures the shortest distance from any node in the predicted path to the last reference path node. Success Rate (SR) measures how often the last node in the predicted path is within a threshold distance 1866 Metric ↑↓Definition Desiderata coverage (1) (2) (3) (4) (5) Path Length (PL) P 1≤i<|P| d(pi, pi+1)   Navigation Error (NE) ↓ d(p|P|, r|R|)   Oracle Navigation Error (ONE) ↓ minp∈P d(p, r|R|)   Success Rate (SR) ↑ 1[NE(P, R) ≤dth]   Oracle Success Rate (OSR) ↑ 1[ONE(P, R) ≤dth]   Success weighted by PL (SPL) ↑ SR(P, R) · d(p1, r|R|) max{PL(P), d(p1, r|R|)}    Success weighted by Edit Distance (SED) ↑ SR(P, R)  1 − ED(P, R) max {|P|, |R|} −1      Coverage weighted by LS (CLS) ↑ PC(P, R) · LS(P, R)      Table 2: Definition and desiderata coverage of navigation metrics. dth of the last reference path node. 
Oracle Success Rate (OSR) measures how often any node in the predicted path is within a threshold distance dth of the last node in the reference path. Success weighted by Path Length (SPL) (Anderson et al., 2018a) takes into account both Success Rate and the normalized path length. It was proposed as a single summary measure for navigation tasks. Note that the agent should maximize this metric, and it is only greater than 0 if the success criteria was met. While this metric is ideally suited when the evaluating whether the agent successfully reached the desired destination, it does not take into account any notion of similarity between the predicted and reference trajectories and fails to take into account the intermediary nodes in the reference path. As such, it violates Desideratum (1). Since there could exist more than one path with optimal length to the desired destination, it also violates Desideratum (3). Success weighted by Edit Distance (SED) (Chen et al., 2019) is based on the edit distance ED(P, R) between the two paths, equal to the Levenshtein distance between the two sequences of actions AP = ((p1, p2), (p2, p3), ..., (p|P|−1, p|P|)) and AR = ((r1, r2), (r2, r3), ..., (r|R|−1, r|R|)). The Levenshtein distance is the minimum number of edit operations (insertion, deletion and substitution of actions) that can transform path AR into AP . Similarly to SPL, SED is also multiplied by SR(P, R), so only paths that meet the success criteria receive a score greater than 0. This metric naturally satisfies Desideratum (1), (3) and (4). Further, it is possible to compute it using dynamic programming in O(|P||R|), further satisfying Desideratum Figure 4: With respect to the blue path, SED yields zero for both the orange and red paths, while CLS yields a score of 0.89 for orange and 0.48 for red. (5). Desideratum (2), however, is left unsatisfied, as SED does not take into account how two actions differ from each other (considering, for instance, the graph distance between their end nodes), but only if they are the same or not. This subtle but important difference is illustrated in Figure 4. 3.3 Coverage weighted by Length Score We introduce Coverage weighted by Length Score (CLS) as a single summary measure for VLN. CLS is the product of the Path Coverage (PC) and Length Score (LS) of the agent’s path P with respect to reference path R: CLS(P, R) = PC(P, R) · LS(P, R) (1) PC replaces SR as a non-binary measure of how well the reference path is covered by the agent’s path. It is the average coverage of each node in the reference path R with respect to path P: PC(P, R) = 1 |R| X r∈R exp  −d(r, P) dth  (2) 1867 where d(r, P)= minp∈P d(r, p) is the distance to reference path node r from the nearest node in P. The coverage contribution for each node r is an exponential decay of this distance. (1/dth is a decay constant to account for graph scale.) LS compares the predicted path length PL(P) to EPL, the expected optimal length given R’s coverage of P. If say, the predicted path covers only half of the reference path (i.e., PC = 0.5), then we expect the optimal length of the predicted path to be half of the length of the reference path. 
As a result, EPL is given by: EPL(P, R) = PC(P, R) · PL(R) (3) LS for a predicted path P is optimal only if PL(P) is equal to the expected optimal length – it is penalized when the predicted path length is shorter or longer than the expected path length: LS(P, R) = EPL(P, R) EPL(P, R) + |EPL(P, R) −PL(P)| (4) There is a clear parallel between the terms of CLS and SPL. CLS replaces success rate, the first term of SPL, with path coverage, a continuous indicator for measuring how well the predicted path covered the nodes on the reference path. Unlike SR, PC is sensitive to the intermediary nodes in the reference path R. The second term of SPL penalizes the path length PL(P) of the predicted path against the optimal (shortest) path length d(p1, r|R|); CLS replaces that with length score LS, which penalizes the agent path length PL(P) against EPL, the expected optimal length for its coverage of R. CLS naturally covers Desideratum (1) and (2). Assuming that the reference path is acyclic and that p1 = r1, i.e., reference and predicted path start at the same node, Desideratum (3) is also satisfied. Additionally, CLS also covers Desideratum (4) because PC and LS are both invariant to the graph scale (due to the term dth). Finally, the distances from each pair of nodes in the graph can be pre-computed using Dijkstra’s algorithm (Dijkstra, 1959) for each node, resulting in a complexity of O(EV + V 2 log(V )), where V and E are the number of vertices and edges in the graph, respectively. PC(P, R) can be computed in O(|P||R|), and LS(P, R) can be computed in O(|P| + |R|), making CLS satisfy Desideratum (5). 4 Agent We reimplement the Reinforced Cross-Modal Matching (RCM) agent of Wang et al. (2019) and extend it to use a reward function based on both CLS (Section 3.3) as well as success rate. 4.1 Navigator The reasoning navigator of Wang et al. (2019) learns a policy πθ over parameters θ that map the natural language instruction X and the initial visual scene v1 to a sequence of actions a1..T . At time step t, the agent state is modeled using a LSTM (Hochreiter and Schmidhuber, 1997) that encodes the trajectory of past visual scenes and agent actions, ht=LSTM([vt; at−1], ht−1), where vt is the output of visual encoder as described below. Language Encoder Language instructions X = x1..n are initialized with pre-trained GloVe word embeddings (Pennington et al., 2014) that are finetuned during training. We restrict the GloVe vocabulary to tokens that occur at least five times in the instruction data set. All out of vocabulary tokens are mapped to a single OOV identifier. Using a bidirectional recurrent network (Schuster and Paliwal, 1997) we encode the instruction into language contextual representations w1..n. Visual Features As in Fried et al. (2018), at each time step t, the agent perceives a 360-degree panoramic view of its surroundings from the current location. The view is discretized into m view angles (m = 36 in our implementation, 3 elevations x 12 headings at 30-degree intervals). The image at view angle i, heading angle φ and elevation angle θ is represented by a concatenation of the pre-trained CNN image features with the 4-dimensional orientation feature [sin φ; cos φ; sin θ; cos θ] to form vt,i. The visual encoder pools the representation of all view angles vt,1..m using attention over the previous agent state ht−1. 
vt = Attention(ht−1, vt,1..m) (5) The actions available to the agent at time t are denoted as ut,1..l, where ut,j is the representation of navigable direction j from the current location obtained similarly to vt,i (Fried et al., 2018). The number of available actions, l, varies for different locations, since nodes in the graph have different number of connections. Action Predictor As in Wang et al. (2019), the 1868 model predicts the probability pk of each navigable direction k using a bilinear dot product. pk = softmax([ht; ctext t ; cvisual t ]Wc(ut,kWu)T ) (6) ctext t = Attention(ht, w1..n) (7) cvisual t = Attention(ctext t , vt,1..m) (8) 4.2 Learning Training is performed using two separate phases, (1) behavioral cloning (Bain and Sammut, 1999; Wang et al., 2019; Daftry et al., 2016) and (2) REINFORCE policy gradient updates (Williams, 1992). As is common in cases where expert demonstrations are available, the agent’s policy is initialized using behavior cloning to constrain the learning algorithm to first model state-action spaces that are most relevant to the task, effectively warm starting the agent with a good initial policy. No reward shaping is required during this phase as behavior cloning corresponds to solving the following maximum-likelihood problem, max θ X (s,a)∈D log πθ(a|s) (9) where D is the demonstration data set. After warm starting the model with behavioral cloning, we obtain standard policy gradient updates by sampling action sequences from the agent’s behavior policy. As in standard policy gradient updates, the model is optimized by minimizing the loss function LPG whose gradient is the negative policy gradient estimator (Williams, 1992). LPG = −ˆEt[log πθ(at|st) ˆAt] (10) where the expectation ˆEt is taken over a finite batch of sample trajectories generated by the agent’s stochastic policy πθ. To reduce variance, we scale the gradient using the advantage function ˆAt=Rt−ˆbt. (Rt= P∞ i=t γi−tri is the observed γdiscounted episodic return and ˆbt is the estimated value of the agent’s current state at time t.) The models are trained using mini-batch gradient descent. Our experiments show that interleaving behavioral cloning and policy gradient training phases improves performance on the validation set. Specifically we interleaved each policy gradient update batch with K behaviour cloning batches, with the value of K decaying exponentially, such that the training strategy asymptotically becomes only policy gradient updates. 4.3 Reward For consistency with the established benchmark (Wang et al., 2019), we implemented a dense goaloriented reward function that optimizes the success rate metric. This includes an immediate reward at time step t in an episode of length T, given by: r(st, at) =      d(st, r|R|)− d(st+1, r|R|) if t < T 1[d(sT , r|R|) ≤dth] if t = T (11) where d(st, r|R|) is the distance between st and target location r|R|, 1[·] is the indicator function, dth is the maximum distance from r|R| that the agent is allowed to terminate for success. To incentivize the agent to not only reach the target location but also to conform to the reference path, we also train our agents with following fidelity-oriented sparse reward: r(st, at) =      0 if t < T 1[d(sT , r|R|) ≤dth]+ CLS(s1...T , R) if t = T (12) where R is the reference path in the dataset associated with the instruction X. This rewards actions that are consistent both with reaching the goal and following the path corresponding to the language instructions. 
It is worth noting here that, similar to Equation 11, a relative improvement in CLS can be added as a reward-shaping term for time steps t < T, however empirically we did not find noticeable difference in the performance of agents trained with or without the shaping term. For simplicity, all of the experiments involving fidelity-oriented reward use the sparse reward in Equation 12. 5 Results We obtain the performance of models trained under two training objectives. The first is goal oriented (Equation 11): agents trained using this reward are encouraged to pursue only the last node in the reference path. The second is fidelity oriented (Equation 12): agents trained using this reward receive credit not only for reaching the target location successfully but also for conforming to the reference path. We report the performance on standard metrics (PL, NE, SR, SPL) as well as the new CLS metric. 1869 To further explore the role of language, we perform ablation studies, where agents are trained using the full language instructions and evaluated on partial (last 5 tokens) or no instructions. With no instructions, the agent only has the full visual input, similar to the unimodal ablation studies of Thomason et al. (2019). To eliminate the effect observed due to distribution shift during evaluation and preserve the length distribution of the input instructions, we further conducted studies where agents are given arbitrary instructions from the validation set, with the reference path remaining unaltered. We observed that experiments with arbitrary instruction had similar results to studies where instructions where fully removed. On the R4R dataset, the fidelity oriented agent significantly outperforms the goal oriented agent (> 14% absolute improvement in CLS), demonstrating that including CLS in the reward signal successfully produces better conformity to the reference trajectories. Furthermore, on Validation Unseen, when all but the last 5 tokens of instructions are removed, the goal oriented agent yields the same CLS as with the full instructions, while the fidelity oriented agent suffers significantly, decaying from 34.6% to 25.3%. This indicates that including fidelity measurements as reward signals improve the agent’s reliance on language instructions– thereby better keeping the L in VLN. 5.1 R2R Performance Table 3 summarizes the experiments on R2R.2 There are not major differences between goal oriented and fidelity oriented agents, highlighting the problematic nature of R2R paths with respect to instruction following: essentially, rewards that only take into account the goal implicitly signals path conformity—by the construction of the dataset itself. As a result, an agent optimized to reach the 2Our goal oriented results match the RCM benchmark on validation unseen but are lower on validation seen. We suspect this is due to differences in implementation details and hyper-parameter choices. 3For the random evaluation, we first sample the number of edges in the trajectory from the distribution of number of edges in the reference paths of the training dataset. Then, for each node, we uniformly sample between its neighbors and move the agent there. We report the average metrics for 1 million random trajectories. 4As in Wang et al. (2019), we report the performance of Speaker-Follower model from Fried et al. (2018) that utilizes panoramic action space and augmented data but no beam search (pragmatic inference) for a fair comparison. 
5We report the performance of the RCM model without intrinsic reward as the benchmark. target destination may incidentally appear to be conforming to the instructions. The results shown in Section 5.2 further confirm this hypothesis by training and evaluate goal oriented and target oriented agents on R4R dataset. As evidenced by the ablation studies, models draw some signal from the language instructions. However, having the last five tokens makes up for a significant portion of the gap between no instructions and full instructions, again highlighting problems with R2R and the importance in R2R of identifying the right place to stop rather than following the path. The performance of both the agents degrade in similar proportions when instructions are partially or fully removed. Finally, as expected, the SPL metric appears consistent with CLS on R2R, since all reference paths are shortest-to-goal. As highlighted in Section 5.2, this breaks in settings where paths twist and turn. 5.2 R4R Performance Table 4 shows the results on R4R. Overall, the scores for all model variants on R4R are much lower than R2R, which highlights the additional challenge of following longer instructions for longer paths. Most importantly, the fidelity oriented agent significantly outperforms the goal oriented agent for both CLS and navigation error, demonstrating the importance of both measuring path fidelity and using it to guide agent learning. On the experiments, the goal oriented agent continues to exploit biases and the underlying structure in the environment to reach the goal. When the instructions are removed during evaluation, the agent’s performance on the CLS metric barely degrades, showing that the agent does not rely significantly on the instructions for its performance. In contrast, the fidelity oriented agent learns to pursue conformity to the reference path, which in turn requires attending more carefully to the instructions. When instructions are removed during evaluation, performance of the fidelity oriented agent degrades considerably on the CLS metric. In fact, the fidelity oriented agent performs better on CLS metric without instructions as the goal oriented agent performs with the full instructions. Furthermore, we highlight that historically dominant metrics are ineffective – even misleading – for measuring agents’ performance: for instance, especially for reference paths that begin and end at close locations, SPL is a poor measure of suc1870 Validation Seen Validation Unseen # Model PL NE ↓SR ↑SPL ↑CLS ↑ PL NE ↓SR ↑SPL ↑CLS ↑ 0 Random3 10.4 9.82 5.0 3.7 29.4 9.32 9.32 5.2 4.0 29.0 1 Speaker-Follower (Fried et al., 2018)4 3.36 66.4 6.62 35.5 2 RCM (Wang et al., 2019)5 12.1 3.25 67.6 15.0 6.01 40.6 3 Speaker-Follower 15.5 4.98 50.1 40.1 54.8 15.2 6.36 35.3 28.1 42.9 4 RCM, goal oriented 13.7 4.48 55.3 47.9 61.1 14.8 6.00 41.1 32.7 47.4 5 last 5 tokens 16.9 7.35 26.5 22.2 39.0 15.1 8.16 22.2 17.2 35.1 6 no instructions 21.1 7.78 22.3 11.6 27.5 17.7 8.69 13.0 9.4 26.1 7 RCM, fidelity oriented 12.2 4.63 57.3 50.7 60.2 13.2 6.38 40.8 35.1 50.9 8 last 5 tokens 13.4 8.08 27.8 23.5 42.4 14.4 8.29 23.2 17.7 35.5 9 no instructions 20.1 8.95 18.2 8.8 24.8 20.5 8.76 14.3 6.2 22.7 Table 3: Results on R2R Validation Seen and Validation Unseen sets. Rows 0 and 3-9 shows numbers from our implementations. SR, SPL and CLS are reported as percentages and NE and PL in meters. 
Validation Seen Validation Unseen # Model PL NE ↓SR ↑SPL ↑CLS ↑ PL NE ↓SR ↑SPL ↑CLS ↑ 0 Random3 21.8 11.4 13.1 2.0 23.1 23.6 10.4 13.8 2.2 22.3 1 Speaker-Follower 15.4 5.35 51.9 37.3 46.4 19.9 8.47 23.8 12.2 29.6 2 RCM, goal oriented 24.5 5.11 55.5 32.3 40.4 32.5 8.45 28.6 10.2 20.4 3 last 5 tokens 29.5 8.73 26.4 12.4 35.1 29.5 9.04 23.4 4.5 20.4 4 no instructions 32.3 9.50 20.7 8.0 33.3 34.0 9.45 19.0 2.3 17.4 5 RCM, fidelity oriented 18.8 5.37 52.6 30.6 55.3 28.5 8.08 26.1 7.7 34.6 6 last 5 tokens 17.1 8.88 24.8 11.7 39.3 25.5 8.52 18.9 5.6 25.3 7 no instructions 12.7 10.5 12.1 5.4 37.2 22.8 9.41 15.5 4.9 23.0 Table 4: Results on R4R Validation Seen and Validation Unseen sets (see Section 2). SR, SPL and CLS are reported as percentages and NE and PL in meters. cess since it assumes the optimal path length is the shortest distance between the starting and ending positions (as illustrated in Figure 1, for example). This is particularly noticeable from the results: the goal oriented agent gets better SPL scores than the fidelity oriented agent, even when it has massively poorer performance on conformity (CLS). 6 Conclusion The CLS metric, R4R, and our experiments provide a better toolkit for measuring the impact of better language understanding in VLN. Furthermore, our findings suggests ways that future datasets and metrics for judging agents should be constructed and set up for evaluation. The R4R data itself clearly still has considerable headroom: our reimplementation of the RCM model gets only 34.6 CLS on paths in R4R’s Validation Unseen houses. Keeping in mind that humans have an average navigation error of 1.61 in R2R (Anderson et al., 2018b), the average navigation error of 8.08 meters for R4R by our best agent leaves plenty of headroom. Future agents will need to make effective use of language and its connection to the environment to both drive CLS up and bring NE down in R4R. We expect path fidelity to not only be interesting with respect to grounding language, but to be crucial for many VLN-based problems. For example, future extensions of VLN will likely involve games (Baldridge et al., 2018) where the instructions being given take the agent around a trap or help it avoid opponents. Similar constraints could hold in search-and-rescue human-robot teams (Kruijff et al., 2014; Kruijff-Korbayov et al., 2016), where the direct path could take a rolling robot into an area with greater danger of collapse. In such scenarios, going straight to the goal could be literally deadly to the robot or agent. Acknowledgments We would like to thank our anonymous reviewers and the Google Research team, especially Radu Soricut, for the insightful comments that contributed to this paper. 1871 References Peter Anderson, Angel Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, et al. 2018a. On evaluation of embodied navigation agents. arXiv preprint arXiv:1807.06757 . Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko S¨underhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. 2018b. Visionand-language navigation: Interpreting visuallygrounded navigation instructions in real environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. 2015. VQA: Visual question answering. In 2015 IEEE International Conference on Computer Vision (ICCV). pages 2425–2433. 
Michael Bain and Claude Sammut. 1999. A framework for behavioural cloning. In Machine Intelligence 15, Intelligent Agents [St. Catherine’s College, Oxford, July 1995]. Oxford University, Oxford, UK, UK, pages 103–129. Jason Baldridge, Tania Bedrax-Weiss, Daphne Luong, Srini Narayanan, Bo Pang, Fernando Pereira, Radu Soricut, Michael Tseng, and Yuan Zhang. 2018. Points, paths, and playscapes: Large-scale spatial language understanding tasks set in the real world. In Proceedings of the First International Workshop on Spatial Language Understanding. Association for Computational Linguistics, New Orleans, pages 46–52. Yonatan Bisk, Kevin Shih, Yejin Choi, and Daniel Marcu. 2018. Learning interpretable spatial operations in a rich 3d blocks world. In Proceedings of the Thirty-Second Conference on Artificial Intelligence (AAAI-18). New Orleans, USA. Valts Blukis, Dipendra Misra, Ross A. Knepper, and Yoav Artzi. 2018. Mapping navigation instructions to continuous control actions with position visitation prediction. In Proceedings of the Conference on Robot Learning. Howard Chen, Alane Suhr, Dipendra Misra, and Yoav Artzi. 2019. Touchdown: Natural language navigation and spatial reasoning in visual street environments. In Conference on Computer Vision and Pattern Recognition. Volkan Cirik, Yuan Zhang, and Jason Baldridge. 2018. Following formulaic map instructions in a street simulation environment. NIPS Visually Grounded Interaction and Language Workshop . Shreyansh Daftry, J. Andrew Bagnell, and Martial Hebert. 2016. Learning transferable policies for monocular reactive MAV control. In International Symposium on Experimental Robotics. Springer, pages 3–11. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e M.F. Moura, Devi Parikh, and Dhruv Batra. 2017. Visual Dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Harm de Vries, Kurt Shuster, Dhruv Batra, Devi Parikh, Jason Weston, and Douwe Kiela. 2018. Talk the walk: Navigating new york city through grounded dialogue. CoRR abs/1807.03367. Edsger W Dijkstra. 1959. A note on two problems in connexion with graphs. Numerische mathematik 1(1):269–271. J. Donahue, L. A. Hendricks, M. Rohrbach, S. Venugopalan, S. Guadarrama, K. Saenko, and T. Darrell. 2017. Long-term recurrent convolutional networks for visual recognition and description. IEEE Transactions on Pattern Analysis and Machine Intelligence 39(4):677–691. H. Fang, S. Gupta, F. Iandola, R. K. Srivastava, L. Deng, P. Dollr, J. Gao, X. He, M. Mitchell, J. C. Platt, C. L. Zitnick, and G. Zweig. 2015. From captions to visual concepts and back. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pages 1473–1482. Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, and Trevor Darrell. 2018. Speaker-Follower models for Vision-and-Language Navigation. In Neural Information Processing Systems (NeurIPS). Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput. 9(8):1735– 1780. Haoshuo Huang, Vihan Jain, Harsh Mehta, Jason Baldridge, and Eugene Ie. 2019. Multi-modal discriminative model for vision-and-language navigation. In Proceedings of the Combined Workshop on Spatial Language Understanding and Grounded Communication for Robotics (SpLURoboNLP-2019). Association for Computational Linguistics, Minneapolis. G.J.M. 
Kruijff, Ivana Kruijff-Korbayova, Shanker Keshavdas, Benoit Larochelle, Miroslav Janicek, Francis Colas, Ming Liu, Franois Pomerleau, Roland Siegwart, Mark Neerincx, Rosemarijn Looije, Nanja Smets, Tina Mioch, Jurriaan Diggelen, Fiora Pirri, Mario Gianni, Federico Ferri, Matteo Menna, Rainer Worst, and Vaclav Hlavac. 2014. Designing, developing, and deploying systems to support humanrobot teams in disaster response. Advanced Robotics 28. I. Kruijff-Korbayov, L. Freda, M. Gianni, V. Ntouskos, V. Hlav, V. Kubelka, E. Zimmermann, H. Surmann, K. Dulic, W. Rottner, and E. Gissi. 2016. Deployment of ground and aerial robots in earthquakestruck amatrice in italy (brief report). In 2016 IEEE 1872 International Symposium on Safety, Security, and Rescue Robotics (SSRR). pages 278–279. Chih-Yao Ma, Jiasen Lu, Zuxuan Wu, Ghassan Alregib, Zsolt Kira, Richard Socher, and Caiming Xiong. 2019. Self-monitoring navigation agent via auxiliary progress estimation. In Proceedings of the International Conference on Learning Representations (ICLR). Matt Macmahon, Brian Stankiewicz, and Benjamin Kuipers. 2006. Walk the talk: Connecting language, knowledge, action in route instructions. In Proceedings of the National Conference on Artificial Intelligence (AAAI). pages 1475–1482. Dipendra Misra, Andrew Bennett, Valts Blukis, Eyvind Niklasson, Max Shatkhin, and Yoav Artzi. 2018. Mapping instructions to actions in 3D environments with visual goal prediction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Brussels, Belgium, pages 2667– 2678. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 1532–1543. M. Schuster and K.K. Paliwal. 1997. Bidirectional recurrent neural networks. Trans. Sig. Proc. 45(11):2673–2681. Pararth Shah, Marek Fiser, Aleksandra Faust, Chase Kew, and Dilek Hakkani-Tur. 2018. FollowNet: Robot navigation by following natural language directions with deep reinforcement learning. In Third Machine Learning in Planning and Control of Robot Motion Workshop at ICRA. D. Skoˇcaj, A. Vreˇcko, M. Mahniˇc, M. Jan´ıˇcek, G.J. M. Kruijff, M. Hanheide, N. Hawes, J. L. Wyatt, T. Keller, K. Zhou, M. Zillich, and M. Kristan. 2016. An integrated system for interactive continuous learning of categorical knowledge. Journal of Experimental & Theoretical Artificial Intelligence 28:823–848. Jesse Thomason, Daniel Gordon, and Yonatan Bisk. 2019. Shifting the baseline: Single modality performance on visual navigation & QA. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). Jesse Thomason, Jivko Sinapov, Raymond Mooney, and Peter Stone. 2018. Guiding exploratory behaviors for multi-modal grounding of linguistic descriptions. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) pages 3156–3164. Xin Wang, Wenhu Chen, Jiawei Wu, Yuan-Fang Wang, and William Yang Wang. 2018. Video captioning via hierarchical reinforcement learning. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition pages 4213–4222. 
Xin Wang, Qiuyuan Huang, Asli C¸ elikyilmaz, Jianfeng Gao, Dinghan Shen, Yuan-Fang Wang, William Yang Wang, and Lei Zhang. 2019. Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Edward C. Williams, Nakul Gopalan, Mina Rhee, and Stefanie Tellex. 2018. Learning to parse natural language to grounded reward functions with weak supervision. In 2018 IEEE International Conference on Robotics and Automation, ICRA 2018, Brisbane, Australia, May 21-25, 2018. pages 1–7. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning 8(3):229–256. Claudia Yan, Dipendra Misra, Andrew Bennett, Aaron Walsman, Yonatan Bisk, and Yoav Artzi. 2018. CHALET: Cornell House Agent Learning Environment. CoRR abs/1801.07357. Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alexander J. Smola. 2016. Stacked attention networks for image question answering. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) pages 21–29. Haonan Yu, Jiang Wang, Zhiheng Huang, Yi Yang, and Wei Xu. 2016. Video paragraph captioning using hierarchical recurrent neural networks. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016. pages 4584–4593.
2019
181
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1873–1883 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1873 Expressing Visual Relationships via Language Hao Tan1, Franck Dernoncourt2, Zhe Lin2, Trung Bui2, Mohit Bansal1 1UNC Chapel Hill 2Adobe Research {haotan, mbansal}@cs.unc.edu, {dernonco, zlin, bui}@adobe.com Abstract Describing images with text is a fundamental problem in vision-language research. Current studies in this domain mostly focus on single image captioning. However, in various real applications (e.g., image editing, difference interpretation, and retrieval), generating relational captions for two images, can also be very useful. This important problem has not been explored mostly due to lack of datasets and effective models. To push forward the research in this direction, we first introduce a new language-guided image editing dataset that contains a large number of real image pairs with corresponding editing instructions. We then propose a new relational speaker model based on an encoder-decoder architecture with static relational attention and sequential multi-head attention. We also extend the model with dynamic relational attention, which calculates visual alignment while decoding. Our models are evaluated on our newly collected and two public datasets consisting of image pairs annotated with relationship sentences. Experimental results, based on both automatic and human evaluation, demonstrate that our model outperforms all baselines and existing methods on all the datasets.1 1 Introduction Generating captions to describe natural images is a fundamental research problem at the intersection of computer vision and natural language processing. Single image captioning (Mori et al., 1999; Farhadi et al., 2010; Kulkarni et al., 2011) has many practical applications such as text-based image search, photo curation, assisting of visuallyimpaired people, image understanding in social 1Our data and code are publicly available at: https://github.com/airsplay/ VisualRelationships Remove the people from the picture. Relational Speaker Figure 1: An example result of our method showing the input image pair from our Image Editing Request dataset, and the output instruction predicted by our relational speaker model trained on the dataset. media, etc. This task has drawn significant attention in the research community with numerous studies (Vinyals et al., 2015; Xu et al., 2015; Anderson et al., 2018), and recent state of the art methods have achieved promising results on large captioning datasets, such as MS COCO (Lin et al., 2014). Besides single image captioning, the community has also explored other visual captioning problems such as video captioning (Venugopalan et al., 2015; Xu et al., 2016), and referring expressions (Kazemzadeh et al., 2014; Yu et al., 2017). However, the problem of two-image captioning, especially the task of describing the relationships and differences between two images, is still underexplored. In this paper, we focus on advancing research in this challenging problem by introducing a new dataset and proposing novel neural relational-speaker models.2 To the best of our knowledge, Jhamtani and Berg-Kirkpatrick (2018) is the only public dataset aimed at generating natural language descriptions for two real images. 
This dataset is about ‘spotting the difference’, and hence focuses more on describing exhaustive differences by learning align2We will release the full data and code upon publication. 1874 ments between multiple text descriptions and multiple image regions; hence the differences are intended to be explicitly identifiable by subtracting two images. There are many other tasks that require more diverse, detailed and implicit relationships between two images. Interpreting image editing effects with instructions is a suitable task for this purpose, because it has requirements of exploiting visual transformations and it is widely used in real life, such as explanation of complex image editing effects for laypersons or visuallyimpaired users, image edit or tutorial retrieval, and language-guided image editing systems. We first build a new language-guided image editing dataset with high quality annotations by (1) crawling image pairs from real image editing request websites, (2) annotating editing instructions via Amazon Mechanical Turk, and (3) refining the annotations through experts. Next, we propose a new neural speaker model for generating sentences that describe the visual relationship between a pair of images. Our model is general and not dependent on any specific dataset. Starting from an attentive encoderdecoder baseline, we first develop a model enhanced with two attention-based neural components, a static relational attention and a sequential multi-head attention, to address these two challenges, respectively. We further extend it by designing a dynamic relational attention module to combine the advantages of these two components, which finds the relationship between two images while decoding. The computation of dynamic relational attention is mathematically equivalent to attention over all visual “relationships”. Thus, our method provides a direct way to model visual relationships in language. To show the effectiveness of our models, we evaluate them on three datasets: our new dataset, the ”Spot-the-Diff” dataset (Jhamtani and BergKirkpatrick, 2018), and the two-image visual reasoning NLVR2 dataset (Suhr et al., 2019) (adapted for our task). We train models separately on each dataset with the same hyper-parameters and evaluate them on the same test set across all methods. Experimental results demonstrate that our model outperforms all the baselines and existing methods. The main contributions of our paper are: (1) We create a novel human language guided image editing dataset to boost the study in describing visual relationships; (2) We design novel relationalspeaker models, including a dynamic relational attention module, to handle the problem of twoimage captioning by focusing on all their visual relationships; (3) Our method is evaluated on several datasets and achieves the state-of-the-art. 2 Datasets We present the collection process and statistics of our Image Editing Request dataset and briefly introduce two public datasets (viz., Spot-the-Diff and NLVR2). All three datasets are used to study the task of two-image captioning and evaluating our relational-speaker models. Examples from these three datasets are shown in Fig. 2. 2.1 Image Editing Request Dataset Each instance in our dataset consists of an image pair (i.e., a source image and a target image) and a corresponding editing instruction which correctly and comprehensively describes the transformation from the source image to the target image. 
Our collected Image Editing Request dataset will be publicly released along with the scripts to unify it with the other two datasets. 2.1.1 Collection Process To create a high-quality, diverse dataset, we follow a three-step pipeline: image pairs collection, editing instructions annotation, and post-processing by experts (i.e., cleaning and test set annotations labeling). Images Pairs Collection We first crawl the editing image pairs (i.e., a source image and a target image) from posts on Reddit (Photoshop request subreddit)3 and Zhopped4. Posts generally start with an original image and an editing specification. Other users would send their modified images by replying to the posts. We collect original images and modified images as source images and target images, respectively. Editing Instruction Annotation The texts in the original Reddit and Zhopped posts are too noisy to be used as image editing instructions. To address this problem, we collect the image editing instructions on MTurk using an interactive interface that allows the MTurk annotators to either write an image editing instruction corresponding to a displayed image pair, or flag it as invalid (e.g., if the two images have nothing in common). 3https://www.reddit.com/r/photoshoprequest 4http://zhopped.com 1875 Convert Add a sword and a cloak to the squirrel. The blue truck is no longer there. A car is approaching the parking lot from the right. Ours (Image Editing Request) Spot-the-Diff NLVR2 Captioning NLVR2 Classification Each image shows a row of dressed dogs posing with a cat that is also wearing some garment. In at least one of the images, six dogs are posing for a picture, while on a bench. Each image shows a row of dressed dogs posing with a cat that is also wearing some garment. True False Figure 2: Examples from three datasets: our Image Editing Request, Spot-the-Diff, and NLVR2. Each example involves two natural images and an associated sentence describing their relationship. The task of generating NLVR2 captions is converted from its original classification task. B-1 B-2 B-3 B-4 Rouge-L Ours 52 34 21 13 45 Spot-the-Diff 41 25 15 8 31 MS COCO 38 22 15 8 34 Table 1: Human agreement on our datasets, compared with Spot-the-Diff and MS COCO (captions=3). B-1 to B-4 are BLEU-1 to BLEU-4. Our dataset has the highest human agreement. Post-Processing by Experts Mturk annotators are not always experts in image editing. To ensure the quality of the dataset, we hire an image editing expert to label each image editing instruction of the dataset as one of the following four options: 1. correct instruction, 2. incomplete instruction, 3. implicit request, 4. other type of errors. Only the data instances labeled with “correct instruction” are selected to compose our dataset, and are used in training or evaluating our neural speaker model. Moreover, two additional experts are required to write two more editing instructions (one instruction per expert) for each image pair in the validation and test sets. This process enables the dataset to be a multi-reference one, which allows various automatic evaluation metrics, such as BLEU, CIDEr, and ROUGE to more accurately evaluate the quality of generated sentences. 2.1.2 Dataset Statistics The Image Editing Request dataset that we have collected and annotated currently contains 3,939 image pairs (3061 in training, 383 in validation, 495 in test) with 5,695 human-annotated instructions in total. 
Each image pair in the training set has one instruction, and each image pair in the validation and test sets has three instructions, written by three different annotators. Instructions have an average length of 7.5 words (standard deviation: 4.8). After removing the words with less than three occurrences, the dataset has a vocabulary of 786 words. The human agreement of our dataset is shown in Table 1. The word frequencies in our dataset are visualized in Fig. 3. Most of the images in our dataset are realistic. Since the task is image editing, target images may have some artifacts (see Image Editing Request examples in Fig. 2 and Fig. 5). 2.2 Existing Public Datasets To show the generalization of our speaker model, we also train and evaluate our model on two public datasets, Spot-the-Diff (Jhamtani and BergKirkpatrick, 2018) and NLVR2 (Suhr et al., 2019). Instances in these two datasets are each composed of two natural images and a human written sentence describing the relationship between the two 1876 3/4/2019 wordcloud.svg file:///Users/hatan/Desktop/wordcloud.svg 1/1 image remove background add change make increase picture white crop color black contrast photo entire brightness face whole blue text replace man brighten red behind put right zoom head people colors filterlighten darken hair girl yellow around take rotate left light green brighter woman effect insert turn eyes top sky side area decrease hand blur dog darker frame man's bottom car skin baby pink logo sharpen sign person girl's new hat delete front corner glare word one glasses lighter needs two little place water boy give back onto resize cut smaller away reflection everything purple move slightly distort girls shirt edit look guy orange bigger brown mans significantly lines tone cat photoshop please wall beach solid line saturation border words dogs just eye adjust select shadow different enlarge clouds dark grey body flip finger nose tree like lighting degrees middle gray circle leash part colorful tattoo except closer faces boy's less fire can bit bottle chair glass size clockwise cartoon center larger child lot sharpness removed blurry tint sun use exposure flowers vibrant instead outline show sunglasses letters reduce grass santa clear moon boys fill arm hue woman's forest shrink camera bright ground trump first erase star flag stains apply tiger flame spots cats guys boat saturated sharper t­shirt clearer womans donald couple window zombie figure smoke table hands scene signs heart rest old building saturate persons shadows sunset centre mouth wires space heads tool lady half motorcycle photograph lettering portrait number object smooth mirror wheels turned paint floor areas lower focus much want neck cap gun bag fix get see colorize subjects mountain enhance writing effects lights collar clone fence fish rims glow bike big billboard character balloons graffiti gradient mustache borders cropped clothes inside flower coming edges trees raise match torch suit next beer door day kid rid imperfections straighten landscape position wrinkles subject visible stripes counter switch cracks beard ocean parts youre going house cover arms city ball says bar say yellowness highlight intensity scratches cigarette standing straight children clarity overall guitars colored corners rainbow bubbles correct graphic cooler design second extend layer looks dress bread leave paste sepia plain horse three write paper tiki sand gold four coin font dots box horizontal headphones completely christmas blemishes lightning coloring together 
pictures isolate objects creases monster vehicle looking portion sitting cloudy batman pencil invert cowboy pupils wooden meadow sketch covers leaves clean shark sides style shape birds babys happy boots bring field claus need legs view loaf mask lens tan paw tv desaturate backround grayscale eliminate president vertical eyebrows forehead vibrance drawing holding shorten another clinton natural overlay unknown images baby's helmet whiter street chairs warmer better blonde across silver wings shade thing stray stain crown cross cat's night marks small cable bokeh blood gonna feet fall long lion frog draw tilt game snow lips pole card fold cars halo hes www.maxim.com microphone highlights minecraft butterfly circular american necklace birthday ponytail balance stretch peoples quality blanket outside red­eye player create crease throne appear square higher strong poster height screen symbol family letter guitar statue facing whiten longer third blend chest mario cream hairs anime saber boost touch sword ghost stars stamp every brick board shoes truck mommy flare tears dog's beam neon time palm lake auto pool bald road name fade date lamp men ice fit tag reflections orientation foreground cartoonish television basketball vertically mountains brightess explosion hairstyle rectangle original graphics sandwich godzilla triangle drawings painting dinosaur removing separate entirely outlined shoulder numbers figures restore wearing america hillary focused gorilla wording circled pattern setting zombies animals redness texture showing section thicker emblem planet makeup simply bronze smiley ninety redeye ladies making flames expand upside levels photos trumps desert within outfit along jesus extra large swirl thats party bluer drive plate tubes bunch teeth makes piece empty grand trash cloud rocks smile cigar beams stone clown theft crowd month bride bunny chain paws wars mark dive haze kids spot iron wood acne army bird cans will made jack roof also full swap duck pony puss Figure 3: Word cloud showing the vocabulary frequencies of our Image Editing Request dataset. images. To the best of our knowledge, these are the only two public datasets with a reasonable amount of data that are suitable for our task. We next briefly introduce these two datasets. Spot-the-Diff This dataset is designed to help generate a set of instructions that can comprehensively describe all visual differences. Thus, the dataset contains images from video-surveillance footage, in which differences can be easily found. This is because all the differences could be effectively captured by subtractions between two images, as shown in Fig. 2. The dataset contains 13,192 image pairs, and an average of 1.86 captions are collected for each image pair. The dataset is split into training, validation, and test sets with a ratio of 8:1:1. NLVR2 The original task of Cornell Natural Language for Visual Reasoning (NLVR2) dataset is visual sentence classification, see Fig. 2 for an example. Given two related images and a natural language statement as inputs, a learned model needs to determine whether the statement correctly describes the visual contents. We convert this classification task to a generation task by taking only the image pairs with correct descriptions. After conversion, the amount of data is 51,020, which is almost half of the original dataset with a size of 107,296. We also preserve the training, validation, and test split in the original dataset. 
3 Relational Speaker Models In this section, we aim to design a general speaker model that describes the relationship between two images. Due to the different kinds of visual relationships, the meanings of images vary in different tasks: “before” and “after” in Spot-the-Diff, “left” and “right” in NLVR2, “source” and “target” in our Image Editing Request dataset. We use the nomenclature of “source” and “target” for simplification, but our model is general and not designed for any specific dataset. Formally, the model generates a sentence {w1, w2, ..., wT } describing the relationship between the source image I SRC and the target image I TRG. {wt}T t=1 are the word tokens with a total length of T. I SRC and I TRG are natural images in their raw RGB pixels. In the rest of this section, we first introduce our basic attentive encoder-decoder model, and show how we gradually improve it to fit the task better. 3.1 Basic Model Our basic model (Fig. 4(a)) is similar to the baseline model in Jhamtani and Berg-Kirkpatrick (2018), which is adapted from the attentive encoder-decoder model for single image captioning (Xu et al., 2015). We use ResNet-101 (He et al., 2016) as the feature extractor to encode the source image I SRC and the target image I TRG. The feature maps of size N × N × 2048 are extracted, where N is the height or width of the feature map. Each feature in the feature map represents a part of the image. Feature maps are then flattened to two N2 × 2048 feature sequences f SRC and f TRG, which are further concatenated to a single feature sequence f. f SRC = ResNet (I SRC) (1) f TRG = ResNet (I TRG) (2) f =  f SRC 1 , . . . , f SRC N2 , f TRG 1 , , . . . , f TRG N2  (3) At each decoding step t, the LSTM cell takes the embedding of the previous word wt−1 as an input. The word wt−1 either comes from the ground truth (in training) or takes the token with maximal probability (in evaluating). The attention module then attends to the feature sequence f with the hidden output ht as a query. Inside the attention module, it first computes the alignment scores αt,i between the query ht and each fi. Next, the feature sequence f is aggregated with a weighted average (with a weight of α) to form the image context ˆf. Lastly, the context ˆft and the hidden vector ht are merged into an attentive hidden vector ˆht with a 1877 LSTM LSTM Alignment LSTM LSTM Multi-Heads Att LSTM Att Module LSTM (a) Basic Model (b) Multi-Head Attention (d) Dynamic Relational Attention (c) Static Relational Attention LSTM LSTM Alignment Reduction Att Module Att Module Att Module ht wt wt−1 p(wt) Figure 4: The evolution diagram of our models to describe the visual relationships. One decoding step at t is shown. The linear layers are omitted for clarity. The basic model (a) is an attentive encoder-decoder model, which is enhanced by the multi-head attention (b) and static relational attention (c). Our best model (d) dynamically computes the relational scores in decoding to avoid losing relationship information. fully-connected layer: ˜wt−1 = embedding (wt−1) (4) ht, ct = LSTM ( ˜wt−1, ht−1, ct−1) (5) αt,i = softmaxi  h⊤ t WIMGfi  (6) ˆft = X i αt,ifi (7) ˆht = tanh(W1[ ˆft; ht] + b1) (8) The probability of generating the k-th word token at time step t is softmax over a linear transformation of the attentive hidden ˆht. 
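To make the basic attentive decoding step concrete, here is a minimal PyTorch-style sketch of Eqs. 4-8 together with the output projection; layer names, dimensions, and the module structure are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicAttnDecoderStep(nn.Module):
    """Illustrative sketch of one decoding step of the basic attentive
    encoder-decoder (Eqs. 4-8 plus the word-prediction projection)."""
    def __init__(self, vocab_size, d_word=512, d_hid=512, d_feat=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_word)
        self.lstm = nn.LSTMCell(d_word, d_hid)
        self.w_img = nn.Linear(d_feat, d_hid, bias=False)  # W_IMG in Eq. 6
        self.w1 = nn.Linear(d_feat + d_hid, d_hid)         # W_1, b_1 in Eq. 8
        self.w_out = nn.Linear(d_hid, vocab_size)          # output projection

    def forward(self, w_prev, state, feats):
        """w_prev: (B,) previous word ids; feats: (B, 2*N*N, d_feat), the
        concatenated source/target feature maps f of Eq. 3."""
        h, c = self.lstm(self.embed(w_prev), state)                       # Eqs. 4-5
        scores = torch.bmm(self.w_img(feats), h.unsqueeze(2)).squeeze(2)  # Eq. 6
        alpha = F.softmax(scores, dim=1)
        ctx = torch.bmm(alpha.unsqueeze(1), feats).squeeze(1)             # Eq. 7
        h_att = torch.tanh(self.w1(torch.cat([ctx, h], dim=1)))           # Eq. 8
        return F.log_softmax(self.w_out(h_att), dim=1), (h, c)
```

As the text notes, w_prev is the ground-truth token during training and the arg-max prediction at evaluation time; the softmax over the output projection and the corresponding negative log-likelihood loss are spelled out in the equations that follow.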
The loss Lt is the negative log likelihood of the ground truth word token w∗ t : pt(wt,k) = softmaxk  WW ˆht + bW  (9) Lt = −log pt(w∗ t ) (10) 3.2 Sequential Multi-Head Attention One weakness of the basic model is that the plain attention module simply takes the concatenated image feature f as the input, which does not differentiate between the two images. We thus consider applying a multi-head attention module (Vaswani et al., 2017) to handle this (Fig. 4(b)). Instead of using the simultaneous multi-head attention 5 in Transformer (Vaswani et al., 2017), we implement the multi-head attention in a sequential way. This way, when the model is attending to the target image, the contextual information retrieved from the source image is available and can therefore perform better at differentiation or relationship learning. In detail, the source attention head first attends to the flattened source image feature f SRC. The attention module is built in the same way as in Sec. 3.1, except that it now only attends to the source image: αSRC t,i = softmaxi(h⊤ t WSRCf SRC i ) (11) ˆf SRC t = X i αSRC t,i f SRC i (12) ˆhSRC t = tanh(W2[ ˆf SRC t ; ht] + b2) (13) The target attention head then takes the output of the source attention ˆhSRC t as a query to retrieve appropriate information from the target fea5We also tried the original multi-head attention but it is empirically weaker than our sequential multi-head attention. 1878 ture f TRG: αTRG t,j = softmaxj(ˆhSRC⊤ t WTRGf TRG j ) (14) ˆf TRG t = X j αTRG t,j f TRG j (15) ˆhTRG t = tanh(W3[ ˆf TRG t ; ˆhSRC t ] + b3) (16) In place of ˆht, the output of the target head ˆhTRG t is used to predict the next word.6 3.3 Static Relational Attention Although the sequential multi-head attention model can learn to differentiate the two images, visual relationships are not explicitly examined. We thus allow the model to statically (i.e., not in decoding) compute the relational score between source and target feature sequences and reduce the scores into two relationship-aware feature sequences. We apply a bi-directional relational attention (Fig. 4(c)) for this purpose: one from the source to the target, and one from the target to the source. For each feature in the source feature sequence, the source-to-target attention computes its alignment with the features in the target feature sequences. The source feature, the attended target feature, and the difference between them are then merged together with a fully-connected layer: αS→T i,j = softmaxj((WSf SRC i )⊤(WTf TRG j )) (17) ˆf S→T i = X j αS→T i,j f TRG j (18) ˆf S i = tanh(W4[f SRC i ; ˆf S→T i ] + b4) (19) We decompose the attention weight into two small matrices WS and WT so as to reduce the number of parameters, because the dimension of the image feature is usually large. The target-to-source cross-attention is built in an opposite way: it takes each target feature f TRG j as a query, attends to the source feature sequence, and get the attentive feature ˆf T j . We then use these two bidirectional attentive sequences ˆf S i and ˆf T j in the multi-head attention module (shown in previous subsection) at each decoding step. 3.4 Dynamic Relational Attention The static relational attention module compresses pairwise relationships (of size N4) into two 6We tried to exchange the order of two heads or have two orders concurrently. We didn’t see any significant difference in results between them. relationship-aware feature sequences (of size 2× N2). 
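Before the dynamic variant is developed further below, the sequential multi-head attention of Section 3.2 (Eqs. 11-16) can be sketched as two chained attention heads. The reusable head mirrors the attention module of the basic model; names and sizes are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttHead(nn.Module):
    """One attention head of the same form as Eqs. 11-13."""
    def __init__(self, d_query=512, d_feat=2048, d_out=512):
        super().__init__()
        self.w_align = nn.Linear(d_feat, d_query, bias=False)
        self.w_merge = nn.Linear(d_feat + d_query, d_out)

    def forward(self, query, feats):  # query: (B, d_query); feats: (B, L, d_feat)
        scores = torch.bmm(self.w_align(feats), query.unsqueeze(2)).squeeze(2)
        alpha = F.softmax(scores, dim=1)
        ctx = torch.bmm(alpha.unsqueeze(1), feats).squeeze(1)
        return torch.tanh(self.w_merge(torch.cat([ctx, query], dim=1)))

class SequentialMultiHead(nn.Module):
    """Source head first (Eqs. 11-13), then a target head whose query is the
    source-attended state (Eqs. 14-16), so the target attention is conditioned
    on what was just retrieved from the source image."""
    def __init__(self):
        super().__init__()
        self.src_head = AttHead()
        self.trg_head = AttHead()

    def forward(self, h_t, f_src, f_trg):
        h_src = self.src_head(h_t, f_src)    # Eqs. 11-13
        h_trg = self.trg_head(h_src, f_trg)  # Eqs. 14-16
        return h_trg                         # used in place of h_hat_t to predict w_t
```

The static relational attention of Section 3.3 (Eqs. 17-19) then feeds these heads relationship-aware sequences computed once per image pair rather than at every step, which is the compression whose trade-off the next paragraph discusses.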
The compression saves computational resources but has potential drawback in information loss as discussed in Bahdanau et al. (2015) and Xu et al. (2015). In order to avoid losing information, we modify the static relational attention module to its dynamic version, which calculates the relational scores while decoding (Fig. 4(d)). At each decoding step t, the dynamic relational attention calculates the alignment score at,i,j between three vectors: a source feature f SRC i , a target feature f TRG j , and the hidden state ht. Since the dot-product used in previous attention modules does not have a direct extension for three vectors, we extend the dot product and use it to compute the three-vector alignment score. dot(x, y) = X d xd yd = x⊤y (20) dot∗(x, y, z) = X d xd ydzd = (x ⊙y)⊤z (21) at,i,j = dot∗(WSKf SRC i , WTKf TRG j , WHKht) (22) = (WSKf SRC i ⊙WTKf TRG j )⊤WHKht (23) where ⊙is the element-wise multiplication. The alignment scores (of size N4) are normalized by softmax. And the attention information is fused to the attentive hidden vector ˆf D t as previous. αt,i,j = softmaxi,j (at,i,j) (24) ˆf SRC-D t = X i,j αt,i,jf SRC i (25) ˆf TRG-D t = X i,j αt,i,jf TRG j (26) ˆf D t = tanh(W5[ ˆf SRC-D t ; ˆf TRG-D t ; ht]+b5) (27) = tanh(W5S ˆf SRC-D t + W5T ˆf TRG-D t + W5Hht + b5) (28) where W5S, W5T, W5H are sub-matrices of W5 and W5 = [W5S, W5T, W5H]. According to Eqn. 23 and Eqn. 28, we find an analog in conventional attention layers with following specifications: • Query: ht • Key: WSKf SRC i ⊙WTKf TRG j • Value: W5Sf SRC i + W5Tf TRG j The key WSKf SRC i ⊙WTKf TRG j and the value W5Sf SRC i +W5Tf TRG j can be considered as representations of the visual relationships between f SRC i 1879 Method BLEU-4 CIDEr METEOR ROUGE-L Our Dataset (Image Editing Request) basic model 5.04 21.58 11.58 34.66 +multi-head att 6.13 22.82 11.76 35.13 +static rel-att 5.76 20.70 12.59 35.46 -static +dynamic rel-att 6.72 26.36 12.80 37.25 Spot-the-Diff CAPT(Jhamtani and Berg-Kirkpatrick, 2018) 7.30 26.30 10.50 25.60 DDLA(Jhamtani and Berg-Kirkpatrick, 2018) 8.50 32.80 12.00 28.60 basic model 5.68 22.20 10.98 24.21 +multi-head att 7.52 31.39 11.64 26.96 +static rel-att 8.31 33.98 12.95 28.26 -static +dynamic rel-att 8.09 35.25 12.20 31.38 NLVR2 basic model 5.04 43.39 10.82 22.19 +multi-head att 5.11 44.80 10.72 22.60 +static rel-att 4.95 45.67 10.89 22.69 -static +dynamic rel-att 5.00 46.41 10.37 22.94 Table 2: Automatic metric of test results on three datasets. Best results of the main metric are marked in bold font. Our full model is the best on all three datasets with the main metric. and f TRG j . It is a direct attention to the visual relationship between the source and target images, hence is suitable for the task of generating relationship descriptions. 4 Results To evaluate the performance of our relational speaker models (Sec. 3), we trained them on all three datasets (Sec. 2). We evaluate our models based on both automatic metrics as well as pairwise human evaluation. We also show our generated examples for each dataset. 4.1 Experimental Setup We use the same hyperparameters when applying our model to the three datasets. Dimensions of hidden vectors are 512. The model is optimized by Adam with a learning rate of 1e −4. We add dropout layers of rate 0.5 everywhere to avoid over-fitting. When generating instructions for evaluation, we use maximum-decoding: the word wt generated at time step t is arg maxk p(wt,k). 
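Returning briefly to Section 3.4, the dynamic relational attention (Eqs. 20-28) amounts to a trilinear score over every (source cell, target cell) pair at each decoding step. The sketch below is an illustrative reading of those equations; tensor layout, names, and sizes are assumptions, and it materializes the full N^2 x N^2 score tensor for clarity even though that is memory-heavy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicRelationalAttention(nn.Module):
    """Sketch of the dynamic relational attention (Eqs. 20-28): attention over
    all N^2 x N^2 visual relationships at each decoding step."""
    def __init__(self, d_hid=512, d_feat=2048, d_key=512):
        super().__init__()
        self.w_sk = nn.Linear(d_feat, d_key, bias=False)  # W_SK
        self.w_tk = nn.Linear(d_feat, d_key, bias=False)  # W_TK
        self.w_hk = nn.Linear(d_hid, d_key, bias=False)   # W_HK
        self.w5 = nn.Linear(2 * d_feat + d_hid, d_hid)    # W_5, b_5

    def forward(self, h_t, f_src, f_trg):
        # f_src, f_trg: (B, L, d_feat) with L = N^2
        key = self.w_sk(f_src).unsqueeze(2) * self.w_tk(f_trg).unsqueeze(1)  # (B, L, L, d_key)
        a = torch.einsum('bijd,bd->bij', key, self.w_hk(h_t))                # Eq. 23
        alpha = F.softmax(a.flatten(1), dim=1).view_as(a)                    # Eq. 24, softmax over (i, j)
        f_src_d = torch.einsum('bij,bid->bd', alpha, f_src)                  # Eq. 25
        f_trg_d = torch.einsum('bij,bjd->bd', alpha, f_trg)                  # Eq. 26
        return torch.tanh(self.w5(torch.cat([f_src_d, f_trg_d, h_t], dim=1)))  # Eqs. 27-28
```

Materializing the (B, L, L, d_key) key tensor is the simplest reading of Eq. 23; a real implementation might compute the pairwise scores blockwise to save memory.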
For the Spot-the-Diff dataset, we take the “Single sentence decoding” experiment as in Jhamtani and Berg-Kirkpatrick (2018). We also try to mix the three datasets but we do not see any improvement. We also try different ways to mix the three datasets but we do not see improvement. We first train a unified model on the union of these datasets. The metrics drop a lot because the tasks and language domains (e.g., the word dictionary and lengths of sentences) are different from each other. We next only share the visual components to overcome the disagreement in language. However, the image domain are still quite different from each other (as shown in Fig. 2). Thus, we finally separately train three models on the three datasets with minimal cross-dataset modifications. 4.2 Metric-Based Evaluation As shown in Table 2, we compare the performance of our models on all three datasets with various automated metrics. Results on the test sets are reported. Following the setup in Jhamtani and Berg-Kirkpatrick (2018), we takes CIDEr (Vedantam et al., 2015) as the main metric in evaluating the Spot-the-Diff and NLVR2 datasets. However, CIDEr is known as its problem in up-weighting unimportant details (Kilickaya et al., 2017; Liu et al., 2017b). In our dataset, we find that instructions generated from a small set of short phrases could get a high CIDEr score. We thus change the main metric of our dataset to METEOR (Banerjee and Lavie, 2005), which is manually verified to be aligned with human judgment on the validation set in our dataset. To avoid over-fitting, the model is 1880 Basic Full Both Good Both Not Ours(IEdit) 11 24 5 60 Spot-the-Diff 22 37 6 35 NLVR2 24 37 17 22 Table 3: Human evaluation on 100 examples. Image pair and two captions generated by our basic model and full model are shown to the user. The user chooses one from ‘Basic’ model wins, ‘Full’ model wins, ‘Both Good’, or ‘Both Not’. Better model marked in bold font. early-stopped based on the main metric on validation set. We also report the BLEU-4 (Papineni et al., 2002) and ROUGE-L (Lin, 2004) scores. The results on various datasets shows the gradual improvement made by our novel neural components, which are designed to better describe the relationship between 2 images. Our full model has a significant improvement in result over baseline. The improvement on the NLVR2 dataset is limited because the comparison of two images was not forced to be considered when generating instructions. 4.3 Human Evaluation and Qualitative Analysis We conduct a pairwise human evaluation on our generated sentences, which is used in Celikyilmaz et al. (2018) and Pasunuru and Bansal (2017). Agarwala (2018) also shows that the pairwise comparison is better than scoring sentences individually. We randomly select 100 examples from the test set in each dataset and generate captions via our full speaker model. We ask users to choose a better instruction between the captions generated by our full model and the basic model, or alternatively indicate that the two captions are equal in quality. The Image Editing Request dataset is specifically annotated by the image editing expert. The winning rate of our full model (dynamic relation attention) versus the basic model is shown in Table 3. Our full model outperforms the basic model significantly. We also show positive and negative examples generated by our full model in Fig. 5. 
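For readers who want to reproduce Table 2-style numbers, such scores are typically computed with the standard coco-caption toolkit. The following is a hedged sketch assuming the pycocoevalcap package and inputs that have already been tokenized (e.g., with its PTBTokenizer); it is not the authors' exact evaluation script.

```python
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.cider.cider import Cider
from pycocoevalcap.meteor.meteor import Meteor
from pycocoevalcap.rouge.rouge import Rouge

def caption_metrics(gts, res):
    """gts: {pair_id: [ref_instruction, ...]}  (multiple references per pair)
       res: {pair_id: [generated_instruction]} (exactly one hypothesis per pair)"""
    out = {}
    bleu, _ = Bleu(4).compute_score(gts, res)
    out.update({f'BLEU-{i + 1}': b for i, b in enumerate(bleu)})
    out['METEOR'], _ = Meteor().compute_score(gts, res)
    out['ROUGE-L'], _ = Rouge().compute_score(gts, res)
    out['CIDEr'], _ = Cider().compute_score(gts, res)
    return out
```

These automatic scores complement, rather than replace, the pairwise human evaluation of Table 3 and the qualitative analysis that follows.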
In our Image Editing Request corpus, the model was able to detect and describe the editing actions but it failed in handling the arbitrary complex editing actions. We keep these hard examples in our dataset to match real-world requirements and allow follow-up future works to pursue the remaining challenges in this task. Our model is designed for non-localized relationships thus we do not explicitly model the pixel-level differences; however, we still find that the model could learn these differences in the Spot-the-Diff dataset. Since the descriptions in Spot-the-Diff is relatively simple, the errors mostly come from wrong entities or undetected differences as shown in Fig. 5. Our model is also sensitive to the image contents as shown in the NLVR2 dataset. 5 Related Work In order to learn a robust captioning system, public datasets have been released for diverse tasks including single image captioning (Lin et al., 2014; Plummer et al., 2015; Krishna et al., 2017), video captioning (Xu et al., 2016), referring expressions (Kazemzadeh et al., 2014; Mao et al., 2016), and visual question answering (Antol et al., 2015; Zhu et al., 2016; Johnson et al., 2017). In terms of model progress, recent years witnessed strong research progress in generating natural language sentences to describe visual contents, such as Vinyals et al. (2015); Xu et al. (2015); Ranzato et al. (2016); Anderson et al. (2018) in single image captioning, Venugopalan et al. (2015); Pan et al. (2016); Pasunuru and Bansal (2017) in video captioning, Mao et al. (2016); Liu et al. (2017a); Yu et al. (2017); Luo and Shakhnarovich (2017) in referring expressions, Jain et al. (2017); Li et al. (2018); Misra et al. (2018) in visual question generation, and Andreas and Klein (2016); CohnGordon et al. (2018); Luo et al. (2018); Vedantam et al. (2017) in other setups. Single image captioning is the most relevant problem to the two-images captioning. Vinyals et al. (2015) created a powerful encoder-decoder (i.e., CNN to LSTM) framework in solving the captioning problem. Xu et al. (2015) further equipped it with an attention module to handle the memorylessness of fixed-size vectors. Ranzato et al. (2016) used reinforcement learning to eliminate exposure bias. Recently, Anderson et al. (2018) brought the information from object detection system to further boost the performance. Our model is built based on the attentive encoder-decoder model (Xu et al., 2015), which is the same choice in Jhamtani and Berg-Kirkpatrick (2018). We apply the RL training with selfcritical (Rennie et al., 2017) but do not see significant improvement, possibly because of the relatively small data amount compared to MS COCO. We also observe that the detection system in An1881 add a filter to the image change the background to blue Positive Examples Negative Examples Image Editing Request Spot-the-Diff NLVR2 there is a bookshelf with a white shelf in one of the images . the left image shows a pair of shoes wearing a pair of shoes . the person in the white shirt is gone the black car in the middle row is gone Figure 5: Examples of positive and negative results of our model from the three datasets. Selfies are blurred. derson et al. (2018) has a high probability to fail in the three datasets, e.g., the detection system can not detect the small cars and people in spot-thediff dataset. The DDLA (Difference Description with Latent Alignment) method proposed in Jhamtani and Berg-Kirkpatrick (2018) learns the alignment between descriptions and visual differences. 
It relies on the nature of the particular dataset and thus could not be easily transferred to other dataset where the visual relationship is not obvious. The two-images captioning could also be considered as a two key-frames video captioning problem, and our sequential multi-heads attention is a modified version of the seq-to-seq model (Venugopalan et al., 2015). Some existing work (Chen et al., 2018; Wang et al., 2018; Manjunatha et al., 2018) also learns how to modify images. These datasets and methods focus on the image colorization and adjustment tasks, while our dataset aims to study the general image editing request task. 6 Conclusion In this paper, we explored the task of describing the visual relationship between two images. We collected the Image Editing Request dataset, which contains image pairs and human annotated editing instructions. We designed novel relational speaker models and evaluate them on our collected and other public existing dataset. Based on automatic and human evaluations, our relational speaker model improves the ability to capture visual relationships. For future work, we are going to further explore the possibility to merge the three datasets by either learning a joint image representation or by transferring domain-specific knowledge. We are also aiming to enlarge our Image Editing Request dataset with newly-released posts on Reddit and Zhopped. Acknowledgments We thank the reviewers for their helpful comments and Nham Le for helping with the initial data collection. This work was supported by Adobe, ARO-YIP Award #W911NF-18-1-0336, and faculty awards from Google, Facebook, and Salesforce. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency. References Aseem Agarwala. 2018. Automatic photography with google clips. Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6077–6086. Jacob Andreas and Dan Klein. 2016. Reasoning about pragmatics with neural listeners and speakers. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1173–1182. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425–2433. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations. Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings 1882 of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72. Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1662–1675. Jianbo Chen, Yelong Shen, Jianfeng Gao, Jingjing Liu, and Xiaodong Liu. 2018. 
Language-based image editing with recurrent attentive models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8721–8729. Reuben Cohn-Gordon, Noah Goodman, and Christopher Potts. 2018. Pragmatically informative image captioning with character-level inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 439–443. Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. 2010. Every picture tells a story: Generating sentences from images. In European conference on computer vision, pages 15–29. Springer. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Unnat Jain, Ziyu Zhang, and Alexander G Schwing. 2017. Creativity: Generating diverse questions using variational autoencoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6485–6494. Harsh Jhamtani and Taylor Berg-Kirkpatrick. 2018. Learning to describe differences between pairs of similar images. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4024–4034. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2901–2910. Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. Referitgame: Referring to objects in photographs of natural scenes. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 787–798. Mert Kilickaya, Aykut Erdem, Nazli Ikizler-Cinbis, and Erkut Erdem. 2017. Re-evaluating automatic metrics for image captioning. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 199–209. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73. Girish Kulkarni, Visruth Premraj, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C Berg, and Tamara L Berg. 2011. Baby talk: Understanding and generating image descriptions. In Proceedings of the 24th CVPR. Citeseer. Yikang Li, Nan Duan, Bolei Zhou, Xiao Chu, Wanli Ouyang, Xiaogang Wang, and Ming Zhou. 2018. Visual question generation as dual task of visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6116–6124. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer. Jingyu Liu, Liang Wang, and Ming-Hsuan Yang. 2017a. Referring expression generation and comprehension via attributes. 
In Proceedings of the IEEE International Conference on Computer Vision, pages 4856–4864. Siqi Liu, Zhenhai Zhu, Ning Ye, Sergio Guadarrama, and Kevin Murphy. 2017b. Improved image captioning via policy gradient optimization of spider. In Proceedings of the IEEE international conference on computer vision, pages 873–881. Ruotian Luo, Brian Price, Scott Cohen, and Gregory Shakhnarovich. 2018. Discriminability objective for training descriptive captions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6964–6974. Ruotian Luo and Gregory Shakhnarovich. 2017. Comprehension-guided referring expressions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7102–7111. Varun Manjunatha, Mohit Iyyer, Jordan Boyd-Graber, and Larry Davis. 2018. Learning to color from language. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 764–769. 1883 Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 11–20. Ishan Misra, Ross Girshick, Rob Fergus, Martial Hebert, Abhinav Gupta, and Laurens van der Maaten. 2018. Learning by asking questions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11–20. Yasuhide Mori, Hironobu Takahashi, and Ryuichi Oka. 1999. Image-to-word transformation based on dividing and vector quantizing images with words. In First International Workshop on Multimedia Intelligent Storage and Retrieval Management, pages 1–9. Citeseer. Pingbo Pan, Zhongwen Xu, Yi Yang, Fei Wu, and Yueting Zhuang. 2016. Hierarchical recurrent neural encoder for video representation with application to captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1029–1038. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Ramakanth Pasunuru and Mohit Bansal. 2017. Reinforced video captioning with entailment rewards. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 979–985. Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641–2649. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In International Conference on Learning Representations. Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7008–7024. Alane Suhr, Stephanie Zhou, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in photographs. In Proceedings of the 57th annual meeting on association for computational linguistics. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, Devi Parikh, and Gal Chechik. 2017. Context-aware captions from context-agnostic supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 251–260. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566–4575. Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond Mooney, Trevor Darrell, and Kate Saenko. 2015. Sequence to sequence-video to text. In Proceedings of the IEEE international conference on computer vision, pages 4534–4542. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3156–3164. Hai Wang, Jason D Williams, and SingBing Kang. 2018. Learning to globally edit images with textual description. arXiv preprint arXiv:1810.05786. Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016. Msrvtt: A large video description dataset for bridging video and language. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5288–5296. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning, pages 2048–2057. Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L Berg. 2017. A joint speaker-listener-reinforcer model for referring expressions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7282–7290. Yuke Zhu, Oliver Groth, Michael Bernstein, and Li FeiFei. 2016. Visual7w: Grounded question answering in images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4995–5004.
2019
182
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1884–1894 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1884 Weakly-Supervised Spatio-Temporally Grounding Natural Sentence in Video Zhenfang Chen1∗Lin Ma2† Wenhan Luo2† Kwan-Yee K. Wong1 1The University of Hong Kong 2Tencent AI Lab {zfchen, kykwong}@cs.hku.hk {forest.linma, whluo.china}@gmail.com Abstract In this paper, we address a novel task, namely weakly-supervised spatio-temporally grounding natural sentence in video. Specifically, given a natural sentence and a video, we localize a spatio-temporal tube in the video that semantically corresponds to the given sentence, with no reliance on any spatio-temporal annotations during training. First, a set of spatiotemporal tubes, referred to as instances, are extracted from the video. We then encode these instances and the sentence using our proposed attentive interactor which can exploit their fine-grained relationships to characterize their matching behaviors. Besides a ranking loss, a novel diversity loss is introduced to train the proposed attentive interactor to strengthen the matching behaviors of reliable instance-sentence pairs and penalize the unreliable ones. Moreover, we also contribute a dataset, called VID-sentence, based on the ImageNet video object detection dataset, to serve as a benchmark for our task. Extensive experimental results demonstrate the superiority of our model over the baseline approaches. Our code and the constructed VID-sentence dataset are available at: https://github.com/ JeffCHEN2017/WSSTG.git. 1 Introduction Given an image/video and a language query, image/video grounding aims to localize a spatial region in the image (Plummer et al., 2015; Yu et al., 2017, 2018) or a specific frame in the video (Zhou et al., 2018) which semantically corresponds to the language query. Grounding has broad applications, such as text based image retrieval (Chen et al., 2017; Ma et al., 2015), description generation (Wang et al., 2018a; Rohrbach et al., 2017; ∗Work done while Zhenfang Chen was a Research Intern with Tencent AI Lab. † Corresponding authors. A brown and white dog is lying on the grass and then it stands up. ... ... Figure 1: The proposed WSSTG task aims to localize a spatio-temporal tube (i.e., the sequence of green bounding boxes) in the video which semantically corresponds to the given sentence, with no reliance on any spatio-temporal annotations during training. Wang et al., 2018b), and question answer (Gao et al., 2018; Ma et al., 2016). Recently, promising progress has been made in image grounding (Yu et al., 2018; Chen et al., 2018c; Zhang et al., 2018) which heavily relies on fine-grained annotations in the form of region-sentence pairs. Fine-grained annotations for video grounding are more complicated and labor-intensive as one may need to annotate a spatio-temporal tube (i.e., label the spatial region in each frame) in a video which semantically corresponds to one language query. To avoid the intensive labor involved in dense annotations, (Huang et al., 2018) and (Zhou et al., 2018) considered the problem of weaklysupervised video grounding where only aligned video-sentence pairs are provided without any fine-grained regional annotations. However, they both ground only a noun or pronoun in a static frame of the video. As illustrated in Fig. 
1, it is difficult to distinguish the target dog (denoted by the green box) from other dogs (denoted by the red boxes) if we attempt to ground only the noun “dog” in one single frame of the video. The main reason is that the textual description of “dog” is not sufficiently expressive and the visual appearance in one single frame cannot characterize the spatio-temporal dynamics (e.g., the action and movements of the “dog”). 1885 In this paper, we introduce a novel task, referred to as weakly-supervised spatio-temporally grounding sentence in video (WSSTG). Specifically, given a natural sentence and a video, we aim to localize a spatio-temporal tube (i.e., a sequence of bounding boxes), referred to as an instance, in the video which semantically matches the given sentence (see Fig. 1). During training, we do not rely on any fine-grained regional annotations. Compared with existing weaklysupervised video grounding problems (Zhou et al., 2018; Huang et al., 2018), our proposed WSSTG task has the following two advantages and challenges. First, we aim to ground a natural sentence instead of just a noun or pronoun, which is more comprehensive and flexible. As illustrated in Fig. 1, with a detailed description like “lying on the grass and then it stands up”, the target dog (denoted by green boxes) can be localized without ambiguity. However, how to comprehensively capture the semantic meaning of a sentence and ground it in a video, especially in a weaklysupervised manner, poses a challenge. Second, compared with one bounding box in a static frame, a spatio-temporal tube (denoted by a sequence of green bounding boxes in Fig. 1) presents the temporal movements of “dog”, which can characterize its visual dynamics and thereby semantically match the given sentence. However, how to exploit and model the spatio-temporal characteristics of the tubes as well as their complicated relationships with the sentence poses another challenge. To handle the above challenges, we propose a novel model realized within the multiple instance learning framework (Karpathy and Fei-Fei, 2015; Tang et al., 2017, 2018). First, a set of instance proposals are extracted from a given video. Features of the instance proposals and the sentence are then encoded by a novel attentive interactor that exploits their fine-grained relationships to generate semantic matching behaviors. Finally, we propose a diversity loss, together with a ranking loss, to train the whole model. During testing, the instance proposal which exhibits the strongest semantic matching behavior with the given sentence is selected as the grounding result. To facilitate our proposed WSSTG task, we contribute a new grounding dataset, called VIDsentence, by providing sentence descriptions for the instances of the ImageNet video object detection dataset (VID) (Russakovsky et al., 2015). Specifically, 7, 654 instances of 30 categories from 4, 381 videos in VID are extracted. For each instance, annotators are asked to provide a natural sentence describing its content. Please refer to Sec. 4 for more details about the dataset. Our main contributions can be summarized as follows. 1) We tackle a novel task, namely weakly-supervised spatio-temporally video grounding (WSSTG), which localizes a spatiotemporal tube in a given video that semantically corresponds to a given natural sentence, in a weakly-supervised manner. 2) We propose a novel attentive interactor to exploit fine-grained relationships between instances and the sentence to characterize their matching behaviors. 
A diversity loss is proposed to strengthen the matching behaviors between reliable instance-sentence pairs and penalize the unreliable ones during training. 3) We contribute a new dataset, named as VID-sentence, to serve as a benchmark for the novel WSSTG task. 4) Extensive experimental results are analyzed, which illustrate the superiority of our proposed method. 2 Related Work Grounding in Images/Videos. Grounding in images has been popular in the research community over the past decade (Kong et al., 2014; Matuszek et al., 2012; Hu et al., 2016; Wang et al., 2016a,b; Li et al., 2017; Cirik et al., 2018; Sadeghi and Farhadi, 2011; Zhang et al., 2017; Xiao et al., 2017; Chen et al., 2019, 2018a). In recent years, researchers also explore grounding in videos. Yu and Siskind (2015) grounded objects in constrained videos by leveraging weak semantic constraints implied by a sequence of sentences. Vasudevan et al. (2018) grounded objects in the last frame of stereo videos with the help of text, motion cues, human gazes and spatial-temporal context. However, fully supervised grounding requires intensive labor for regional annotations, especially in the case of videos. Weakly-Supervised Grounding. To avoid the intensive labor involved in regional annotations, weakly-supervised grounding has been proposed where only image-sentence or videosentence pairs are needed. It was first studied in the image domain (Zhao et al., 2018; Rohrbach et al., 2016). Later, given a sequence of transcriptions and their corresponding video clips as well as their temporal alignment, Huang et al. (2018) 1886 Instance Generator Time a brown squirrel is playing with a blue ball on the floor Input Sentence Int Int Int Instances Attentive Interactor Input Video Loss Figure 2: The architecture of our model. An instance generator is used to produce spatio-temporal instances. An attentive interactor is proposed to exploit the complicated relationships between instances and the sentence. Multiple instance learning is used to train the model with a ranking loss and a diversity loss. grounded nouns/pronouns in specific frames by constructing a visual grounded action graph. The work closest to ours is (Zhou et al., 2018), in which the authors grounded a noun in a specific frame by considering object interactions and loss weighting given one video and one text input. In this work, we also focus on grounding in a videotext pair. However, different from (Zhou et al., 2018) whose text input consists of nouns/pronouns and output is a bounding box in a specific frame, we aim to ground a natural sentence and output a spatio-temporal tube in the video. 3 Method Given a natural sentence query q and a video v, our proposed WSSTG task aims to localize a spatio-temporal tube, referred to as an instance, p = {bt}T t=1 in the video sequence, where bt represents a bounding box in the t-th frame and T denotes the total number of frames. The localized instance should semantically correspond to the sentence query q. As WSSTG is carried out in a weakly-supervised manner, only aligned videosentence pairs {v, q} are available with no finegrained regional annotations during training. In this paper, we cast the WSSTG task as a multiple instance learning problem (Karpathy and FeiFei, 2015). Given a video v, we first generate a set of instance proposals by an instance generator (Gkioxari and Malik, 2015). We then identify which instance semantically matches the natural sentence query q. We propose a novel model for handling the WSSTG task. 
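Because the task is cast as multiple instance learning over automatically generated tube proposals, inference (described later in Sec. 3.4) reduces to scoring every candidate tube against the query sentence and keeping the highest-scoring one. The short Python sketch below illustrates only that selection step; `matching_score` is a placeholder for the attentive interactor of Sec. 3.2, and all names here are illustrative rather than taken from the authors' code.

```python
from typing import Callable, List, Sequence, Tuple

# A spatio-temporal tube is a sequence of per-frame boxes (x1, y1, x2, y2).
Box = Tuple[float, float, float, float]
Tube = List[Box]

def select_instance(
    proposals: Sequence[Tube],
    query: str,
    matching_score: Callable[[Tube, str], float],
) -> Tuple[int, Tube]:
    """MIL-style inference: score every candidate tube against the sentence
    query and return the proposal with the strongest matching behaviour
    (the paper's Eq. 8 supplies the actual score; here it is a stub)."""
    scores = [matching_score(p, query) for p in proposals]
    best = max(range(len(proposals)), key=lambda n: scores[n])
    return best, proposals[best]
```

The weak supervision enters only through how `matching_score` is trained (Sec. 3.3); the selection step itself needs no regional labels.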
It consists of two components, namely an instance generator and an attentive interactor (see Fig. 2). The instance generator links bounding boxes detected in each frame into instance proposals (see Sec. 3.1). The attentive interactor exploits the complicated relationships between instance proposals and the given sentence to yield their matching scores (see Sec. 3.2). The proposed model is optimized with a ranking loss Lrank and a novel diversity loss Ldiv (see Sec. 3.3). Specifically, Lrank aims to distinguish aligned video-sentence pairs from the unaligned ones, while Ldiv targets strengthening the matching behaviors between reliable instance-sentence pairs and penalizing the unreliable ones from the aligned video-sentence pairs. 3.1 Instance Extraction Instance Generation. As shown in Fig. 2, the first step of our method is to generate instance proposals. Similar to (Zhou et al., 2018), the region proposal network from Faster-RCNN (Ren et al., 2015) is used to detect frame-level bounding boxes with corresponding confidence scores, which are then linked to produce spatio-temporal tubes. Let bt denote a detected bounding box at time t and bt+1 denote another box at time t + 1. Following (Gkioxari and Malik, 2015), we define the linking score sl between bt and bt+1 as sl(bt, bt+1) = sc(bt) + sc(bt+1) + λ · IoU(bt, bt+1), (1) where sc(b) is the confidence score of b, IoU(bt, bt+1) is the intersection-over-union (IoU) of bt and bt+1, and λ is a balancing scalar which is set to 0.2 in our implementation. As such, one instance proposal pn can be viewed as a path {bn t }T t=1 over the whole video sequence with energy E(pn) given by E(pn) = 1 T −1 T −1 X t=1 sl(bn t , bn t+1). (2) We identify the instance proposal with the maximal energy by the Viterbi algorithm (Gkioxari and Malik, 2015). We keep the identified instance proposal and remove all the bounding boxes associated with it. We then repeat the above process until there is no bounding box left. This results in a set of instance proposals P = {pn}N n=1, with N being the total number of proposals. Feature Representation. Since an instance proposal consists of bounding boxes in consecutive video frames, we use I3D (Carreira and 1887 A LSTM ... LSTM ... A A + s(q,p) Matching behavior Characterization Interaction ... ... Figure 3: The architecture of the attentive interactor. It consists of two components, namely interaction and matching behavior characterization. A ⃝denotes the attention mechanism in Eqs. (4-6). φ⃝denotes the function in Eq. (7). Zisserman, 2017) and Faster-RCNN to generate the RGB sequence feature I3D-RGB, the flow sequence feature I3D-Flow, and the frame-level RoI pooled feature, respectively. Note that it is not effective to encode each bounding box as an instance proposal may include thousands of bounding boxes. We therefore evenly divide each instance proposal into tp segments and average the features within each segment. tp is set to 20 for all our experiments. We concatenate all three kinds of visual features before feeding it into the following attentive interactor. Taking each segment as a time step, each proposal p is thereby represented as Fp ∈Rtp×dp, a sequence of dp dimensional concatenated visual features at each step. 3.2 Attentive Interactor With the instance proposals from the video and the given sentence query, we propose a novel attentive interactor to characterize the matching behaviors between each proposal and the sentence query. 
Our attentive interactor consists of two coupling components, namely interaction and matching behavior characterization (see Fig. 3). Before diving into the details of the interactor, we first introduce the representation of the query sentence q. We represent each word in q using the 300-dimensional word2vec (Mikolov et al., 2013) and omit words that are not in the dictionary. In this way, each sentence q is represented as Fq ∈ Rtq×dq, where tq is the total number of words in the sentence and dq denotes the dimension of the word embedding. 3.2.1 Interaction Given the sequential visual features Fp ∈Rtp×dp of one candidate instance and the sequential textual features Fq ∈Rtq×dq of the query sentence, we propose an interaction module to exploit their complicated matching behaviors in a finegrained manner. First, two long short-term memory networks (LSTMs) (Hochreiter and Schmidhuber, 1997) are utilized to encode the instance proposal and sentence, respectively: hp t = LSTMp(f p t , hp t−1), hq t = LSTMq(f q t , hq t−1), (3) where fp t and fq t are the t-th row representations in Fp and Fq, respectively. Due to the natural characteristics of LSTM, hp t and hq t, as the yielded hidden states, encode and aggregate the contextual information from the sequential representation, and thereby yield more meaningful and informative visual features Hp = {hp t }tp t=1 and sentence representations Hq = {hq t}tq t=1. Different from (Rohrbach et al., 2016; Zhao et al., 2018) which used only the last hidden state hq tq as the feature embedding for the query sentence, we generate visually guided sentence features Hqp = {hqp t }tp t=1 by exploiting their fine-grained relationships based on Hq and Hp. Specifically, given the i-th visual feature hp i , an attention mechanism (Xu et al., 2015) is used to adaptively summarize Hq = {hq t}tq t=1 with respect to hp i : ei,j = wT tanh (Wqhq j + Wphp i + b1) + b2, (4) ai,j = exp(ei,j) Ptq j′=1 exp(ei,j′) , (5) hqp i = tq X j=1 ai,jhq j, (6) where Wq ∈RK×Dq, Wp ∈RK×Dp, b1 ∈ RK are the learnable parameters that map visual and sentence features to the same K-dimension space. w ∈RK and b2 ∈R work on the coupled textual and visual features and yield their affinity scores. With respect to Wphp i in Eq. (4), the generated visually guided sentence feature hqp i pays more attention on the words more correlated with hp i by adaptively summarizing Hq = {hq t}tq t=1. Owning to the attention mechanism in Eqs. (46), our proposed interaction module makes each 1888 visual feature interact with all the sentence features and attentively summarize them together. As such, fine-grained relationships between the visual and sentence representations are exploited. 3.2.2 Matching Behavior Characterization After obtaining a set of visually guided sentence features Hqp = {hqp t }tp t=1, we characterize the fine-grained matching behaviors between the visual and sentence features. Specifically, the matching behavior between the i-th visual and sentence features is defined as si(hp i , hqp i ) = φ(hp i , hqp i ). (7) The instantiation of φ can be realized by different approaches, such as multi-layer perceptron (MLP), inner-product, or cosine similarity. In this paper, we use cosine similarity between hp i and hqp i for simplicity. Finally, we define the matching behavior between an instance proposal p and the sentence q as s(q, p) = 1 tp tp X i=1 si(hp i , hqp i ). 
(8) 3.3 Training For the WSSTG task, since no regional annotations are available during the training, we cannot optimize the framework in a fully supervised manner. We, therefore, resort to MIL to optimize the proposed network based on the obtained matching behaviors of the instance-sentence pairs. Specifically, our objective function is defined as L = Lrank + β Ldiv, (9) where Lrank is a ranking loss, aiming at distinguishing aligned video-sentence pairs from the unaligned ones. Ldiv is a novel diversity loss, which is proposed to strengthen the matching behaviors between reliable instance-sentence pairs and penalize the unreliable ones from the aligned videosentence pair. β is a scalar which is set to 1 in all our experiments. Ranking Loss. Assume that {v, q} is a semantically aligned video-sentence pair. We define the visual-semantic matching score S between v and q as S(v, q) = max s(q, pn) , n = 1, ..., N , (10) where pn is the n-th proposal generated from the video v, s(q, pn) is the matching behavior computed by Eq. (8), and N is the total number of instance proposals. Suppose that v′ and q′ are negative samples that are not semantically correlated with q and v, respectively. Inspired by (Karpathy and Fei-Fei, 2015), we define the ranking loss as Lrank = X v̸=v′ X q̸=q′ [max(0, S(v, q′) −S(v, q) + ∆)+ max(0, S(v′, q) −S(v, q) + ∆)], (11) where ∆is a margin which is set to 1 in all our experiments. Lrank directly encourages the matching scores of aligned video-sentence pairs to be larger than those of unaligned pairs. Diversity Loss. One limitation of the ranking loss defined in Eq. (11) is that it does not consider the matching behaviors between the sentence and different instance proposals extracted from an aligned video. A prior for video grounding is that only a few instance proposals in the paired video are semantically aligned to the query sentence, while most of the other instance proposals are not. Thus, it is desirable to have a diverse distribution of the matching behaviors {s(q, pn)}N n=1. To encourage a diverse distribution of {s(q, pn)}N n=1, we propose a diversity loss Ldiv to strengthen the matching behaviors between reliable instance-sentence pairs and penalize the unreliable ones during training. Specifically, we first normalize {s(q, pn)}N n=1 by softmax s′(q, pn) = exp(s(q, pn)) PN n′=1 exp(s(q, pn′)) , (12) and then penalize the entropy of the distribution of {s′(q, pn)}N n=1 by defining the diversity loss as Ldiv = − N X n=1 s′(q, pn)log(s′(q, pn)). (13) Note that the smaller Ldiv is, the more diverse {s(q, pn)}N n=1 will be, which implicitly encourages the matching scores of semantically aligned instance-sentence pairs being larger than those of the misaligned pairs. 3.4 Inference Given a testing video and a query sentence, we extract candidate instance proposals, and characterize the matching behavior between each instance proposal and the sentence by the proposed attentive interactor. The instance with the strongest matching behavior is deemed the result of the WSSTG task. 1889 A red bus is making a turn on the road A red bus is making a turn on the road A brown and white dog is lying on the grass and then standing up A large elephant runs in the water from left to right A red bus is making a turn on the road A brown and white dog is lying on the grass and then standing up A large elephant runs in the water from left to right Figure 4: Samples of the newly constructed VIDsentence dataset. 
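The training objective of Sec. 3.3 can be written down compactly. The following PyTorch sketch implements Eqs. (10)-(13) for a single triplet consisting of one aligned video-sentence pair, one mismatched video and one mismatched sentence; batching and negative sampling are simplified, the margin and the weight β default to 1 as in the paper, and the function names are ours.

```python
import torch
import torch.nn.functional as F

def video_sentence_score(proposal_scores: torch.Tensor) -> torch.Tensor:
    """Eq. (10): S(v, q) = max_n s(q, p_n) over the N proposal scores."""
    return proposal_scores.max()

def ranking_loss(pos, neg_video, neg_sent, margin=1.0):
    """Eq. (11) for a single triplet: the aligned pair should outscore both
    the mismatched-sentence and mismatched-video pairs by `margin`."""
    return F.relu(neg_sent - pos + margin) + F.relu(neg_video - pos + margin)

def diversity_loss(proposal_scores: torch.Tensor) -> torch.Tensor:
    """Eqs. (12)-(13): entropy of the softmax over proposal scores; minimising
    it pushes the distribution to peak on a few reliable proposals."""
    probs = F.softmax(proposal_scores, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum()

def wsstg_loss(pos_scores, neg_video_scores, neg_sent_scores, beta=1.0):
    """Total objective of Eq. (9): L = L_rank + beta * L_div."""
    pos = video_sentence_score(pos_scores)
    l_rank = ranking_loss(pos,
                          video_sentence_score(neg_video_scores),
                          video_sentence_score(neg_sent_scores))
    return l_rank + beta * diversity_loss(pos_scores)
```

Note that the diversity term is computed only over the proposal scores of the aligned pair, matching the paper's description that it sharpens the score distribution within a positive video.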
Sentences are shown on the top of images and the associated target instances are enclosed with green bounding boxes. 4 VID-sentence Dataset A main challenge for the WSSTG task is the lack of suitable datasets. Existing datasets like TACoS (Regneri et al., 2013) and YouCook (Das et al., 2013) are unsuitable as they do not provide spatio-temporal annotations for target instances in the videos, which are necessary for the WSSTG task for evaluation. To the best of our knowledge, the most suitable existing dataset is the Personsentence dataset provided by (Yamaguchi et al., 2017), which is used for spatio-temporal person search among videos. However, this dataset is too simple for the WSSTG task since it contains only people in the videos. To this end, we contribute a new dataset by annotating videos in ImageNet video object detection dataset (VID) (Russakovsky et al., 2015) with sentence descriptions. We choose VID as the visual materials for two primary reasons. First, it is one of the largest video detection datasets containing videos of diverse categories in complicated scenarios. Second, it provides dense bounding-box annotations and instance IDs which help avoid labor-intensive annotations for spatio-temporal regions of the validation/testing set. VID-sentence Annotation. With 30 categories, VID contains 3826, 555 and 937 videos for training, validation and testing respectively. We first divide videos in training and validation sets1 into trimmed videos based on the provided instance IDs, and delete videos less than 9 frames. As such, there remain 9, 029 trimmed videos in total. In each trimmed video, one instance is identified as a sequence of bounding boxes. A group of annotators are asked to provide sentence descriptions for the target instances. Each target instance is 1Testing set is omitted as its spatial-temporal annotations are unavailable annotated with one sentence description. An instance is discarded if it is too difficult to provide a unique and precise description. After annotation, there are 7, 654 videos with sentence descriptions. We randomly select 6, 582 videos as the training set, and evenly split the remaining videos into the validation and testing sets (i.e., each contains 536 videos). Some examples from the VID-sentence dataset are shown in Fig. 4. Dataset Statistics. To summarize, the created dataset has 6, 582/536/536 spatiotemporal instances with descriptions for training/validation/testing. It covers all 30 categories in VID, such as “car”, “monkey” and “watercraft”. The size of the vocabulary is 1, 823 and the average length of the descriptions is 13.2. Table 1 shows the statistics of our constructed VID-sentence dataset. Compared with the Person-sentence dataset, our VID-sentence dataset has a similar description length but includes more instances and categories. It is important to note that, although VID provides regional annotations for the training set, these annotations are not used in any of our experiments since we focus on weakly-supervised spatio-temporal video grounding. 5 Experiments In this section, we first compare our method with different kinds of baseline methods on the created VID-sentence dataset, followed by the ablation study. Finally, we show how well our model generalizes on the Person-sentence dataset. 5.1 Experimental Settings Baseline Models. Existing weakly-supervised video grounding methods (Huang et al., 2018; Zhou et al., 2018) are not applicable to the WSSTG task. Huang et al. 
(2018) requires temporal alignment between a sequence of transcription descriptions and the video segments to ground a noun/pronoun in a certain frame, while Zhou et al. (2018) mainly grounds nouns/pronouns in specific frames of videos. As such, we develop three baselines based on DVSA (Karpathy and Fei-Fei, 2015), GroundeR (Rohrbach et al., 2016), and a variant frame-level method modified from (Zhou et al., 2018) for performance comparisons. Following recent grounding methods like (Rohrbach et al., 2016; Chen et al., 2018b), we use the last hidden state of an LSTM encoder as the sentence 1890 Instance Num. Des. Categories train val test length Person 5,437 313 323 13.1 1 Ours 6,582 536 536 13.2 30 Table 1: Statistics of the VID-sentence dataset and previous Person-sentence dataset Yamaguchi et al. (2017). embedding for all the baselines. Since DVSA and GroundeR are originally proposed for image grounding, in order to adapt to video, we consider three methods to encode visual features Fp ∈Rtp×dp including averaging (Avg), NetVLAD (Arandjelovic et al., 2016), and LSTM. For the variant baseline modified from (Zhou et al., 2018), we densely predict each frame to generate a spatio-temporal prediction. Implementation Details. Similar to (Zhou et al., 2018), we use the region proposal network from Faster-RCNN pretrained on MSCOCO (Lin et al., 2014) to extract frame-level region proposals. For each video, we extract 30 bounding boxes for each frame and link them into 30 spatio-temporal tubes with the method (Gkioxari and Malik, 2015). We map the word embedding to 512-dimension before feeding it to the LSTM encoder. Dimension of the hidden state of all the LSTMs is set to 512. Batch size is 16, i.e., 16 videos with total 480 instance proposals and 16 corresponding sentences. We construct positive and negative video-sentence pairs for training within a batch for efficiency, i.e., roughly 16 positive pairs and 240 negative pairs for the triplet construction. SGD is used to optimize the models with a learning rate of 0.001 and momentum of 0.9. We train all the models with 30 epochs. Please refer to supplementary materials for more details. Evaluation Metric. We use the bounding box localization accuracy for evaluation. An output instance is considered as “accurate” if the overlap between the detected instance and the groundtruth is greater than a threshold η. The definition of the overlap is the same as (Yamaguchi et al., 2017), i.e., the average overlap of the bounding boxes in annotated frames. η is set to 0.4, 0.5, 0.6 for extensive evaluations. 5.2 Performance Comparisons Table 2 shows the performance comparisons between our model and the baselines. We additionally show the performance of randomly choosing an instance proposal and the upper bound perforMethods Accuracy 0.4 0.5 0.6 Average Random 8.0 4.3 2.1 4.8 Proposal upper bound 58.6 47.2 36.9 47.6 DVSA+Avg 36.2 29.7 23.5 29.8 DVSA+NetVLAD 31.2 24.8 18.5 24.8 DVSA+LSTM 38.2 31.2 23.5 31.0 GroundeR+Avg 36.7 31.9 25.0 31.2 GroundeR+NetVLAD 26.1 22.2 15.1 21.1 GroundeR+LSTM 36.8 31.2 24.1 30.7 Zhou et al. (2018) 41.6 33.8 27.1 34.2 Ours 44.6 38.2 28.9 37.2 Table 2: Performance comparisons on the proposed VID-sentence dataset. The top entry of all the methods except the upper bound is highlighted in boldface. mance of choosing the instance proposal of the largest overlap with the ground-truth. The results suggest that, 1) models with NetVLAD (Arandjelovic et al., 2016) perform the worst. 
We suspect that models based on NetVLAD are complicated and the supervisions are too weak to optimize the models sufficiently well. 2) Models with LSTM embedding achieve only comparable performances compared with models based on simple averagingf. It is mainly due to the fact that the power of LSTM has not been fully exploited. 3) The variant method of (Zhou et al., 2018) performs better than both DVSA and GroundeR with various kinds of visual encoding techniques, indicating its power for the task. 4) Our model achieves the best results, demonstrating its effectiveness, showing that our model is better at characterizing the matching behaviors between the query sentence and the visual instances in the video. To compare the methods qualitatively, we show an exemplar sample in Fig. 5. Compared with GroundeR+LSTM and DVSA+LSTM, our method identifies a more accurate instance from the candidate instance proposals. Moreover, the instances generated by our method are more temporally consistent compared with the modified frame-level method (Zhou et al., 2018). This can be attributed to the exploitation of the temporal information during instance generation and attentive interactor in our model. 5.3 Ablation Study To verify the contributions of the proposed attentive interactor and diversity loss, we perform the following ablation study. To be specific, we compare the full method with three variants, includ1891 Description: The white car is running from left to right on the left side of the road. DVSA+LSTM, IoU: 0.172 GroundeR+LSTM, IoU: 0.042 Zhou et al. (2018), IoU: 0.413 Ours, IoU: 0.604 Figure 5: An exemplar of the results by different methods. The sentence is shown on the top. Three frames of the detected results and the ground-truth are respectively bounded with blue lines and green dotted lines. IoU scores between the detected instances and the ground-truth are shown below the images. Best viewed on screen. Methods Accuracy 0.4 0.5 0.6 Average Base 38.2 31.2 23.5 31.0 Base + Div 38.4 32.5 25.0 32.0 Base + Int 42.4 35.1 26.1 34.5 Full method 44.6 38.2 28.9 37.2 Table 3: Ablation study of the proposed attentive interactor and diversity loss. segment Id: 0 segment Id: 1 segment Id: 2 Figure 6: Visualization of the attentive interaction. On the top, we show an instance highlighted in the blue box in three different segments. On the bottom, we show the corresponding distributions of the attention weights. Darker colors mean larger attentive weights. Intuitively, the attention weight matches well with the visual contents such as “puppy” in all three segments and “hand” in the segment with ID 2. Best viewed on screen. ing: 1) removing both the attentive interactor and diversity loss, which is equivalent to the DVSA model using LSTM for encoding both the visual features and sentence features, termed as Base; 2) Base+Div, which is formed by introducing the diversity loss; 3) Base+Int with the attentive interactor module. Table 3 shows the corresponding results. Compared with Base, both the diversity loss and attentive interactor constantly improve the performance. Moreover, to show the effectiveness of the proposed attentive interactor, we visualize the adaptive weight a in Eq. (5). As shown in Fig. 6, 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0.0-0.1 0.1-0.2 0.2-0.3 0.3-0.4 0.4-0.5 0.5-0.6 0.6-0.7 0.7-0.8 0.8-0.9 0.9-1.0 Matching behaviors IoU Ours w/o diversity loss Ours Figure 7: Comparison of the distribution of the matching behaviors of instances. 
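For reference, the localization accuracy reported in these tables follows the metric defined in Sec. 5.1: the overlap between a predicted and a ground-truth tube is the average per-frame IoU over the annotated frames (as in Yamaguchi et al., 2017), and an instance counts as accurate when that overlap exceeds η ∈ {0.4, 0.5, 0.6}. A minimal sketch, assuming predictions and ground truth are aligned frame by frame:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-12)

def tube_overlap(pred_tube, gt_tube):
    """Average per-frame IoU over the annotated frames."""
    ious = [iou(p, g) for p, g in zip(pred_tube, gt_tube)]
    return sum(ious) / len(ious)

def localization_accuracy(pred_tubes, gt_tubes, threshold=0.5):
    """Fraction of test instances whose tube overlap exceeds the threshold eta."""
    hits = sum(tube_overlap(p, g) > threshold for p, g in zip(pred_tubes, gt_tubes))
    return hits / len(gt_tubes)
```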
Methods Accuracy 0.4 0.5 0.6 Average Random 15.1 7.2 3.5 8.6 Proposal upper bound 89.8 79.9 64.1 77.9 DVSA+Avg 39.8 30.3 19.7 29.9 DVSA+NetVLAD 34.1 25.0 18.3 25.8 DVSA+LSTM 42.7 30.2 20.0 31.0 GroundeR+Avg 45.5 32.2 21.7 33.1 GroundeR+NetVLAD 22.1 16.1 8.6 15.6 GroundeR+LSTM 39.9 28.2 17.7 28.6 Ours w/o Ldiv 57.9 47.7 35.6 47.1 Ours 62.5 52.0 38.4 51.0 Table 4: Performance comparisons on the Personsentence dataset (Yamaguchi et al., 2017). our method adaptively pays more attention to the words that match the instance such as the “puppy” in all three segments and the “hand” in segment with ID 2. To show the effectiveness of the diversity loss, we divide instance proposals in the testing set into 10 groups based on their IoU scores with the ground-truth and then calculate the average matching behaviors of each group, predicted by counterparts with and without the diversity loss. As shown in Fig. 7, the proposed diversity loss Ldiv penalizes the matching behaviors of the instances of lower IoU with ground-truth while strengthens instances of higher IoU. 1892 5.4 Experiments on Person-sentence Dataset We further evaluate our model and the baseline methods on the Person-sentence dataset (Yamaguchi et al., 2017). We ignore the bounding box annotations in the training set and carry out experiments for the proposed WSSTG task. For fair comparisons, all experiments are conducted on the visual feature extractor provided by (Carreira and Zisserman, 2017). Table 4 shows the results. Similarly, the proposed attentive interactor model (without the diversity loss) outperforms all the baselines. Moreover, the diversity loss further improves the performance. Note that the improvement of our model on this dataset is more significant than that on the VID-sentence dataset. The reason might be that the upper bound performance of the Personsentence is much higher than that of the VIDsentence (77.9 for Person-sentence versus 47.6 for VID-sentence on average). This also suggests that the created VID-sentence dataset is more challenging and more suitable as a benchmark dataset. 6 Conclusion In this paper, we introduced a new task, namely weakly-supervised spatio-temporally grounding natural sentence in video. It takes a sentence and a video as input and outputs a spatio-temporal tube from the video, which semantically matches the sentence, with no reliance on spatio-temporal annotations during training. We handled this task based on the multiple instance learning framework. An attentive interactor and a diversity loss were proposed to learn the complicated relationships between the instance proposals and the sentence. Extensive experiments showed the effectiveness of our model. Moreover, we contributed a new dataset, named as VID-sentence, which can serve as a benchmark for the proposed task. References Relja Arandjelovic, Petr Gronat, Akihiko Torii, Tomas Pajdla, and Josef Sivic. 2016. Netvlad: Cnn architecture for weakly supervised place recognition. In CVPR, pages 5297–5307. Joao Carreira and Andrew Zisserman. 2017. Quo vadis, action recognition? a new model and the kinetics dataset. In CVPR, pages 4724–4733. Jingyuan Chen, Xinpeng Chen, Lin Ma, Zequn Jie, and Tat-Seng Chua. 2018a. Temporally grounding natural sentence in video. In EMNLP. Jingyuan Chen, Lin Ma, Xinpeng Chen, Zequn Jie, and Jiebo Luo. 2019. Localizing natural language in videos. In AAAI. Kan Chen, Trung Bui, Chen Fang, Zhaowen Wang, and Ram Nevatia. 2017. Amc: Attention guided multi-modal correlation learning for image search. 
In CVPR, pages 6203–6211. Kan Chen, Jiyang Gao, and Ram Nevatia. 2018b. Knowledge aided consistency for weakly supervised phrase grounding. arXiv preprint arXiv:1803.03879. Xinpeng Chen, Lin Ma, Jingyuan Chen, Zequn Jie, Wei Liu, and Jiebo Luo. 2018c. Real-time referring expression comprehension by single-stage grounding network. In arXiv: 1812.03426. Volkan Cirik, Taylor Berg-Kirkpatrick, and LouisPhilippe Morency. 2018. Using syntax to ground referring expressions in natural images. In AAAI. P. Das, C. Xu, R. F. Doell, and J. J. Corso. 2013. A thousand frames in just a few words: Lingual description of videos through latent topics and sparse object stitching. In CVPR. Jiyang Gao, Runzhou Ge, Kan Chen, and Ram Nevatia. 2018. Motion-appearance co-memory networks for video question answering. arXiv preprint arXiv:1803.10906. Georgia Gkioxari and Jitendra Malik. 2015. Finding action tubes. In CVPR, pages 759–768. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Ronghang Hu, Huazhe Xu, Marcus Rohrbach, Jiashi Feng, Kate Saenko, and Trevor Darrell. 2016. Natural language object retrieval. In CVPR, pages 4555– 4564. De-An Huang, Shyamal Buch, Lucio Dery, Animesh Garg, Li Fei-Fei, and Juan Carlos Niebles. 2018. Finding “it”: Weakly-supervised reference-aware visual grounding in instructional videos. In CVPR. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In CVPR, pages 3128–3137. Chen Kong, Dahua Lin, Mohit Bansal, Raquel Urtasun, and Sanja Fidler. 2014. What are you talking about? text-to-image coreference. In CVPR, pages 3558– 3565. Jianan Li, Yunchao Wei, Xiaodan Liang, Fang Zhao, Jianshu Li, Tingfa Xu, and Jiashi Feng. 2017. Deep attribute-preserving metric learning for natural language object retrieval. In MM, pages 181–189. 1893 Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In ECCV, pages 740– 755. Lin Ma, Zhengdong Lu, and Hang Li. 2016. Learning to answer questions from image using convolutional neural network. In AAAI. Lin Ma, Zhengdong Lu, Lifeng Shang, and Hang Li. 2015. Multimodal convolutional neural networks for matching image and sentence. In ICCV. Cynthia Matuszek, Nicholas FitzGerald, Luke Zettlemoyer, Liefeng Bo, and Dieter Fox. 2012. A joint model of language and perception for grounded attribute learning. arXiv preprint arXiv:1206.6423. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119. Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models. In ICCV, pages 2641–2649. Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. 2013. Grounding action descriptions in videos. Transactions of the Association for Computational Linguistics, 1:25–36. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS, pages 91–99. Anna Rohrbach, Marcus Rohrbach, Ronghang Hu, Trevor Darrell, and Bernt Schiele. 2016. Grounding of textual phrases in images by reconstruction. In ECCV, pages 817–834. 
Anna Rohrbach, Marcus Rohrbach, Siyu Tang, Seong Joon Oh, and Bernt Schiele. 2017. Generating descriptions with grounded and co-referenced people. arXiv preprint arXiv:1704.01518, 3. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252. Mohammad Amin Sadeghi and Ali Farhadi. 2011. Recognition using visual phrases. In CVPR, pages 1745–1752. Peng Tang, Xinggang Wang, Song Bai, Wei Shen, Xiang Bai, Wenyu Liu, and Alan Loddon Yuille. 2018. Pcl: Proposal cluster learning for weakly supervised object detection. IEEE transactions on pattern analysis and machine intelligence. Peng Tang, Xinggang Wang, Xiang Bai, and Wenyu Liu. 2017. Multiple instance detection network with online instance classifier refinement. In CVPR. Arun Balajee Vasudevan, Dengxin Dai, and Luc Van Gool. 2018. Object referring in videos with language and human gaze. arXiv preprint arXiv:1801.01582. Bairui Wang, Lin Ma, Wei Zhang, and Wei Liu. 2018a. Reconstruction network for video captioning. In CVPR. Jingwen Wang, Wenhao Jiang, Lin Ma, Wei Liu, and Yong Xu. 2018b. Bidirectional attentive fusion with context gating for dense video captioning. In CVPR. Liwei Wang, Yin Li, and Svetlana Lazebnik. 2016a. Learning deep structure-preserving image-text embeddings. In CVPR, pages 5005–5013. Mingzhe Wang, Mahmoud Azab, Noriyuki Kojima, Rada Mihalcea, and Jia Deng. 2016b. Structured matching for phrase localization. In ECCV, pages 696–711. Fanyi Xiao, Leonid Sigal, and Yong Jae Lee. 2017. Weakly-supervised visual grounding of phrases with linguistic structures. arXiv preprint arXiv:1705.01371. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In ICML, pages 2048–2057. Masataka Yamaguchi, Kuniaki Saito, Yoshitaka Ushiku, and Tatsuya Harada. 2017. Spatio-temporal person retrieval via natural language queries. In ICCV. Haonan Yu and Jeffrey Mark Siskind. 2015. Sentence directed video object codetection. arXiv preprint arXiv:1506.02059. Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L Berg. 2018. Mattnet: Modular attention network for referring expression comprehension. In CVPR. Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L Berg. 2017. A joint speakerlistener-reinforcer model for referring expressions. In CVPR, volume 2. Hanwang Zhang, Yulei Niu, and Shih-Fu Chang. 2018. Grounding referring expressions in images by variational context. In CVPR, pages 4158–4166. Yuting Zhang, Luyao Yuan, Yijie Guo, Zhiyuan He, IAn Huang, and Honglak Lee. 2017. Discriminative bimodal networks for visual localization and detection with natural language queries. In CVPR. 1894 Fang Zhao, Jianshu Li, Jian Zhao, and Jiashi Feng. 2018. Weakly supervised phrase localization with multi-scale anchored transformer network. In CVPR, pages 5696–5705. Luowei Zhou, Nathan Louis, and Jason J Corso. 2018. Weakly-supervised video object grounding from text by loss weighting and object interaction. BMVC.
2019
183
The PhotoBook Dataset: Building Common Ground through Visually-Grounded Dialogue Janosch Haber∗Tim Baumg¨artner♣Ece Takmaz♣Lieke Gelderloos‡ Elia Bruni† and Raquel Fern´andez♣ ♣University of Amsterdam, ∗Queen Mary University of London ‡Tilburg University, †Universitat Pompeu Fabra {raquel.fernandez|ece.takmaz}@uva.nl [email protected], {baumgaertner.t|elia.bruni}@gmail.com [email protected] Abstract This paper introduces the PhotoBook dataset, a large-scale collection of visually-grounded, task-oriented dialogues in English designed to investigate shared dialogue history accumulating during conversation. Taking inspiration from seminal work on dialogue analysis, we propose a data-collection task formulated as a collaborative game prompting two online participants to refer to images utilising both their visual context as well as previously established referring expressions. We provide a detailed description of the task setup and a thorough analysis of the 2,500 dialogues collected. To further illustrate the novel features of the dataset, we propose a baseline model for reference resolution which uses a simple method to take into account shared information accumulated in a reference chain. Our results show that this information is particularly important to resolve later descriptions and underline the need to develop more sophisticated models of common ground in dialogue interaction.1 1 Introduction The past few years have seen an increasing interest in developing computational agents for visually grounded dialogue, the task of using natural language to communicate about visual content in a multi-agent setup. The models developed for this task often focus on specific aspects such as image labelling (Mao et al., 2016; Vedantam et al., 2017), object reference (Kazemzadeh et al., 2014; De Vries et al., 2017a), visual question answering (Antol et al., 2015), and first attempts of visual dialogue proper (Das et al., 2017), but fail to produce consistent outputs over a conversation. 1The PhotoBook dataset is being released by the Dialogue Modelling Group led by Raquel Fern´andez at the University of Amsterdam. The core of this work was done while Janosch Haber and Elia Bruni were affiliated with the group. We hypothesise that one of the main reasons for this shortcoming is the models’ inability to effectively utilise dialogue history. Human interlocutors are known to collaboratively establish a shared repository of mutual information during a conversation (Clark and Wilkes-Gibbs, 1986; Clark, 1996; Brennan and Clark, 1996). This common ground (Stalnaker, 1978) then is used to optimise understanding and communication efficiency. Equipping artificial dialogue agents with a similar representation of dialogue context thus is a pivotal next step in improving the quality of their dialogue output. To facilitate progress towards more consistent and effective conversation models, we introduce the PhotoBook dataset: a large collection of 2,500 human-human goal-oriented English conversations between two participants, who are asked to identify shared images in their respective photo books by exchanging messages via written chat. This setup takes inspiration from experimental paradigms extensively used within the psycholinguistics literature to investigate partnerspecific common ground (for an overview, see Brown-Schmidt et al., 2015), adapting them to the requirements imposed by online crowdsourcing methods. The task is formulated as a game consisting of five rounds. 
Figure 1 shows an example of a participant’s display. Over the five rounds of a game, a selection of previously displayed images will be visible again, prompting participants to re-refer to images utilising both their visual context as well as previously established referring expressions. The resulting dialogue data therefore allows for tracking the common ground developing between dialogue participants. We describe in detail the PhotoBook task and the data collection, and present a thorough analysis of the dialogues in the dataset. In addition, to showcase how the new dataset may be exploited for computational modelling, we propose a reference resolution baseline model trained to identify target images being discussed in a given dialogue segment. The model uses a simple method to take into account information accumulated in a reference chain. Our results show that this information is particularly important to resolve later descriptions and highlight the importance of developing more sophisticated models of common ground in dialogue interaction. The PhotoBook dataset, together with the data collection protocol, the automatically extracted reference chains, and the code used for our analyses and models are available at the following site: https://dmg-photobook.github.io. 2 Related Work Seminal works on cooperative aspects of dialogue have developed their hypotheses and models based on a relatively small number of samples collected through lab-based conversation tasks (e.g., Krauss and Weinheimer, 1964, 1966; Clark and WilkesGibbs, 1986; Brennan and Clark, 1996; Anderson et al., 1991). Recent datasets inspired by this line of work include the REX corpora (Takenobu et al., 2012) and PentoRef (Zarrieß et al., 2016). With the development of online data collection methods (von Ahn et al., 2006) a new, game-based approach to quick and inexpensive collection of dialogue data became available. PhotoBook builds on these traditions to provide a large-scale dataset suitable for data-driven development of computational dialogue agents. The computer vision community has recently developed large-scale datasets for visually grounded dialogue (Das et al., 2017; De Vries et al., 2017b). These approaches extend earlier work on visual question answering (Antol et al., 2015) to a multi-turn setup where two agents, each with a pre-determined Questioner or Answerer role, exchange sequences of questions and answers about an image. While data resulting from these tasks provides interesting opportunities to investigate visual grounding, it suffers from fundamental shortcomings with respect to the collaborative aspects of natural goal-oriented dialogue (e.g., fixed, pairwise structuring of question and answers, no extended dialogue history). In contrast, PhotoBook includes natural and free-from dialogue data with a variety of dialogue acts and opportunities for participant collaboration. Resolving referring expressions in the visual modality has also been studied in computer vision. Datasets such as ReferIt (Kazemzadeh et al., 2014), Flicker30k Entities (Plummer et al., 2015) and Visual Genome (Krishna et al., 2017) map referring expressions to regions in a single image. Referring expressions in the PhotoBook dataset differ from this type of data in that the candidate referents are independent but similar images and, most importantly, are often part of a reference chain in the participants’ dialogue history. 
3 Task Description and Setup In the PhotoBook task, two participants are paired for an online multi-round image identification game. In this game, participants are shown collections of images that resemble the page of a photo book (see Figure 1). Each of these collections is a randomly ordered grid of six similar images depicting everyday scenes extracted from the MS COCO Dataset (Lin et al., 2014). On each page of the photo book, some of the images are present in the displays of both participants (the common images). The other images are each shown to one of the participants only (different). Three of the images in each display are highlighted through a yellow bar under the picture. The participants are tasked to mark these highlighted target images as either common or different by chatting with their partner.2 The PhotoBook task is symmetric, i.e., participants do not have predefined roles such as instruction giver and follower, or questioner and answerer. Consequently, both participants can freely and naturally contribute to the conversation, leading to more natural dialogues. Once the two participants have made their selections on a given page, they are shown a feedback screen and continue to the next round of the game, a new page of the photo book displaying a different grid of images. Some of the images in this grid will be new to the game while others will have appeared before. A full game consists of labelling three highlighted target images in each of five consecutive rounds. Each highlighted image is displayed exactly five times throughout a game while the display of images and the order of rounds is randomised to prevent participants from detecting any patterns. As 2Pilot studies showed that labelling all six images took participants about half an hour, which appeared to be too long for the online setting, resulting in large numbers of disconnects and incomplete games. Figure 1: Screenshot of the Amazon Mechanical Turk user interface designed to collect the PhotoBook dataset. a result of this carefully designed setup, dialogues in the PhotoBook dataset contain multiple descriptions of each of the target images and thus provide a valuable resource for investigating participant cooperation, and specifically collaborative referring expression generation and resolution with respect to the conversation’s common ground. Image Sets The task setup requires each game of five rounds to display 12 unique but similar images to elicit non-trivial referring expressions. We use the object category annotations in MS COCO to group all landscape, unmarked, colour images where the two largest objects belong to the same category across all images in the set (e.g., all images in the set prominently feature a person and a cat).3 This produced 30 sets of at least 20 images from which 12 were selected at random. As a given game highlights only half of the images from a given set, each image set produces two different game sets with different target images to be highlighted, for a total of 60 unique games and 360 unique images. More details on the PhotoBook setup and image sets are provided in Appendix A. 4 Data Collection We use the ParlAI framework (Miller et al., 2017) to implement the task and interface with crowdsourcing platform Amazon Mechanical Turk (AMT) to collect the data. 
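The image-set construction described above can be approximated directly from the MS COCO instance annotations: keep landscape images whose two largest annotated objects belong to the same pair of categories as the rest of the set and are sufficiently large, then form game sets from category pairs with at least 20 candidates. The sketch below assumes pycocotools and an arbitrary instances annotation file; the colour/unmarked filters are omitted, the ~30k-pixel cut-off follows our reading of the paper's footnote, and the paper samples the 12 images per set at random rather than truncating.

```python
from collections import defaultdict
from pycocotools.coco import COCO

# Path is illustrative; any MS COCO instances annotation file can be used.
coco = COCO("annotations/instances_train2014.json")

def two_largest_objects(img_id):
    ann_ids = coco.getAnnIds(imgIds=[img_id], iscrowd=False)
    anns = coco.loadAnns(ann_ids)
    return sorted(anns, key=lambda a: a["area"], reverse=True)[:2]

candidate_sets = defaultdict(list)
for img_id in coco.getImgIds():
    img = coco.loadImgs([img_id])[0]
    if img["width"] <= img["height"]:        # keep landscape images only
        continue
    top2 = two_largest_objects(img_id)
    if len(top2) < 2:
        continue
    # Reject images whose two largest objects are too small
    # (combined area below the ~30k-pixel cut-off of footnote 3, our reading).
    if sum(a["area"] for a in top2) < 30_000:
        continue
    # Key each set by the categories of the two largest objects,
    # e.g. (person, cat); all images in a set share this pair.
    key = tuple(sorted(a["category_id"] for a in top2))
    candidate_sets[key].append(img_id)

# Only category pairs with at least 20 candidate images can supply the
# 12 images needed per set (chosen at random in the paper).
image_sets = {k: imgs[:12] for k, imgs in candidate_sets.items() if len(imgs) >= 20}
```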
3All images where the two largest objects cover less than 30k pixels (∼10% of an average COCO image) were rejected.
To control the quality of collected dialogues, we require AMT workers to be native English speakers and to have completed at least 100 other tasks on AMT with a minimum acceptance rate of 90%. Workers are paired through an algorithm based on whether or not they have completed the PhotoBook task before and which of the individual games they have played. In order to prevent biased data, workers can complete a maximum of five games, each participant can complete a given game only once, and the same pair of participants cannot complete more than one game. Participants are instructed about the task and first complete a warming-up round with only three images per participant (two of them highlighted). In order to render reference grounding as clean as possible and facilitate automatic processing of the resulting dialogue data, participants are asked to try to identify the common and different images as quickly as possible, only describe a single image per message, and directly select an image's label when they agree on it. The compensation scheme is based on an average wage of 10 USD per hour (Hara et al., 2018). See Appendix B for a full account of the instructions and further details on participant payment. During data collection, we recorded anonymised participant IDs, the author, timestamp and content of all sent messages, label selections and button clicks, plus self-reported collaboration performance scores. For a period of two months, data collection produced human-human dialogues for a total of 2,506 completed games. The resulting PhotoBook dataset contains a total of 164,615 utterances, 130,322 actions and spans a vocabulary of 11,805 unique tokens. Each of the 60 unique game sets was played between 15 and 72 times, with an average of 41 games per set. The task was completed by 1,514 unique workers, of which 472 only completed a single game, 448 completed between two and four games, and 594 the maximum of five games. Completing a full five-round game took an average of 14.2 minutes. With three highlighted images per player per round, during a full game of five rounds 30 labelling decisions have to be made. On average, participants correctly labelled 28.62 out of these 30.
Figure 2: (a) Average completion times (solid blue) and scores (dashed red) per game round. (b) Ratio of content tokens over total token count per round with best linear fit. (c) Ratio of new content tokens over total content token count per round. (d) Relative change in distribution of main content POS between the first and last game round.
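The logged information listed above (anonymised participant IDs, timestamped messages, label selections, button clicks, self-reported scores) suggests a simple event-based record. The field names below are purely illustrative and are not the released PhotoBook log format; they only indicate one plausible way such events could be structured.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GameEvent:
    """Illustrative (not the official) representation of one logged event."""
    game_id: int
    round_nr: int                      # 1-5
    participant_id: str                # anonymised worker ID
    timestamp: float
    event_type: str                    # e.g. "message", "selection", "button_click"
    message: Optional[str] = None      # chat content, for "message" events
    image_id: Optional[int] = None     # image concerned, for "selection" events
    label: Optional[str] = None        # "common" or "different"
```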
5 Dataset Analysis In this paper, we focus on the analysis of participants’ interaction during a game of five labelling rounds.4 Our data here largely confirms the observations concerning participants’ task efficiency and language use during a multi-round communication task made by seminal, small-scale experiments such as those by Krauss and Weinheimer (1964); Clark and Wilkes-Gibbs (1986); Brennan and Clark (1996) and, due to its scale, offers additional aspects for further investigation. 5.1 Task Efficiency Completing the first round of the PhotoBook task takes participants an average of almost three minutes. Completing the fifth round on the other hand takes them about half that time. As Figure 2a shows, this decline roughly follows a negative logarithmic function, with significant differences between rounds 1, 2, 3 and 4, and plateauing towards the last round. The number of messages sent by participants as well as the average message length follow a similar pattern, significantly decreasing 4Tracking participant IDs, for example, also allows for an analysis of differences in behaviour across different games. between consecutive game rounds. The average number of correct image labels, on the other hand, significantly increases between the first and last round of the game (cf. the red dashed graph in Figure 2a). As a result, task efficiency as calculated by points per minute significantly increases with each game round. 5.2 Linguistic Properties of Utterances To get a better understanding of how participants increase task efficiency and shorten their utterances, we analyse how the linguistic characteristics of messages change over a game. We calculated a normalised content word ratio by dividing the count of content words by the total token count.5 This results in an almost linear increase of content tokens over total token ratio throughout a game (average Pearson’s r per game of 0.34, p ≪0.05, see Figure 2b). With referring expressions and messages in general getting shorter, content words thus appear to be favoured to remain. We also observe that participants reuse these content words. Figure 2c shows the number of novel content tokens per game round, which roughly follows a negative logarithmic function. This supports the hypothesis of participants establishing a conceptual pact on the referring expression attached to a specific referent: Once accepted, a referring expression is typically refined through shortening rather than by reformulating or adding novel information (cf., Brennan and Clark, 1996). We also analysed in more detail the distribution of word classes per game round by tagging messages with the NLTK POS-Tagger. Figure 2d 5We filtered out function words with NLTK’s stopword list http://www.nltk.org/. displays the relative changes in content-word-class usage between the first round and last round of a game. All content word classes but verbs show a relative increase in occurrence, most prominently nouns with a 20% relative increase. The case of adverbs, which show a 12% relative increase, is particular: Manual examination showed that most adverbs are not used to described images but rather to flag that a given image has already appeared before or to confirm/reject (‘again’ and ‘too’ make up 21% of all adverb occurrences; about 36% are ‘not’, ‘n’t’ and ‘yes’). These results indicate that interlocutors are most likely to retain the nouns and adjectives of a developing referring expression, while increasingly dropping verbs, as well as prepositions and determiners. 
A special role here takes definite determiner ‘the’, which, in spite of the stark decline of determiners in general, increases by 13% in absolute occurrence counts between the first and last round of a game, suggesting a shift towards known information. Finally, in contrast to current visual dialogue datasets (Das et al., 2017; De Vries et al., 2017b) which exclusively contain sequences of questionanswer pairs, the PhotoBook dataset includes diverse dialogue acts. Qualitative examination shows that, not surprisingly, a large proportion of messages include an image description. These descriptions however are interleaved with clarification questions, acceptances/rejections, and acknowledgements. For an example, see the dialogue excerpt in Figure 1. Further data samples are available in Appendix C. A deeper analysis of the task-specific dialogue acts would require manual annotation, which could be added in the future. 5.3 Reference Chains In a small-scale pilot study, Ilinykh et al. (2018) find that the pragmatics of goal-oriented dialogue leads to consistently more factual scene descriptions and reasonable referring expressions than traditional, context-free labelling of the same images. We argue that in the PhotoBook task referring expressions are not only adapted based on the goal-oriented nature of the interaction but also by incorporating the developing common ground between the participants. This effect becomes most apparent when collecting all referring expressions for a specific target image produced during the different rounds of a game in its coreference chain. The following excerpt displays such a coreference chain extracted from the PhotoBook dataset: 1. A: Do you have a boy with a teal coloured shirt with yellow holding a bear with a red shirt? 2. B: Boy with teal shirt and bear with red shirt? 3. A: Teal shirt boy? To quantify the effect of referring expression refinement, we compare participants’ first and last descriptions to a given target image with the image’s captions provided in the MS COCO dataset. For this purpose we manually annotated the first and last expressions referring to a set of six target images across ten random games in the PhotoBook dataset. Several examples are provided in Appendix C. Table 1 shows their differences in token count before and after filtering for content words with the NLTK stopword list. Source # Tokens # Content Distance COCO captions 11.167 5.255 – First description 9.963 5.185 0.091 Last description 5.685 5.128 0.156 Table 1: Avg. token counts in COCO captions and the first and last descriptions in PhotoBook, plus their cosine distance to the caption’s cluster mean vector. The distance between first and last descriptions is 0.083. Before filtering, first referring expressions do not significantly differ in length from the COCO captions. Last descriptions however are significantly shorter than both the COCO captions and first descriptions. After filtering for content words, no significant differences remain. We also calculate the cosine distance between the three different descriptions based on their average word vectors.6 Non-function words here should not significantly alter an utterance’s mean word vector, which is confirmed in our results. Before as well as after filtering, the distance between last referring expression and COCO captions is almost double the distance between the first referring expressions and the captions (see last column in Table 1). 
6 Reference Chain Extraction
Collecting reference chains from dialogue data is a non-trivial task which normally requires manual annotation (Yoshida, 2011). Here we propose a simple procedure to automatically extract reference chains made up of dialogue segments. A dialogue segment is defined as a collection of consecutive utterances that, as a whole, discuss a given target image and include expressions referring to it. All dialogue segments within a game that refer to the same target image form its reference chain. In order to automatically segment the collected dialogues in this way, we developed a rule-based heuristic exploiting participants' image labelling actions to detect segment boundaries and their respective targets. The heuristic is described in detail in Appendix D. Since the task instructs participants to label images as soon as they identify them as either common or different, the majority of registered labelling actions can be assumed to conclude the current dialogue segment. The following excerpt displays a segment extracted from a game's first round, discussing one target image before a participant selects its label:

B: I have two kids (boys) holding surf boards walking.
A: I do not have that one.
B: marks #340331 as different

Image selections however do not always delimit segments in the cleanest way possible. For example, a segment may refer to more than one target image, i.e., the participants may discuss two images and only after this discussion be able to identify them as common/different. 72% of the extracted segments are linked to only one target; 25% to two. Moreover, reference chains do not necessarily contain one segment for each of the five game rounds. They may contain fewer or more segments than rounds in a game, since participants may discuss the same image more than once in a single round and some of the extracted chains may be noisy, as explained in the evaluation section below. 75% of the automatically extracted chains contain three to six segments.

Evaluation
To evaluate the segmentation, two annotators independently reviewed segments extracted from 20 dialogues. These segments were annotated by marking all utterances u in a segment S with target images I that refer to an image i′ with i′ ∉ I, to determine precision, and by marking all directly preceding and succeeding utterances u′ outside of a segment S that refer to a target image i ∈ I, to determine recall. Additionally, if a segment S did not include any references to any of its target images I, it was labelled as improper. 95% of annotated segments were assessed to be proper (Cohen's κ of 0.87), with 28.4% of segments containing non-target references besides target references (Cohen's κ of 0.97). Recall across all reviewed segments is 99% (Cohen's κ of 0.93).
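As a rough illustration of the basic idea behind the segmentation (cutting at labelling actions), consider the following sketch. The turn representation is a hypothetical one chosen for the example, not the released data format, and this naive version does not yet include the context-sensitive refinements of Appendix D.

```python
def naive_segments(turns):
    """Cut the dialogue at every labelling action, as a first approximation.

    `turns` is assumed to be a list of dicts such as
    {"type": "message", "speaker": "A", "text": "..."} or
    {"type": "label", "speaker": "B", "image": 340331, "label": "different"}.
    Returns a list of (segment_utterances, target_images) pairs.
    """
    segments, current, targets = [], [], []
    for turn in turns:
        if turn["type"] == "message":
            current.append(turn)
        else:
            # A labelling action is assumed to conclude the segment being developed.
            targets.append(turn["image"])
            if current:
                segments.append((current, targets))
                current, targets = [], []
    return segments
```

Consecutive labelling actions with no message in between are exactly the cases where this naive rule becomes noisy, which motivates the decision-tree heuristic described in Appendix D.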
7 Experiments on Reference Resolution
Using the automatically extracted dialogue segments, we develop a reference resolution model that aims at identifying the target images referred to in a dialogue segment. We hypothesise that later segments within a reference chain might be more difficult to resolve, because they rely on referring expressions previously established by the dialogue participants. As a consequence, a model that is able to keep track of the common ground should be less affected by this effect. To investigate these issues, we experiment with two conditions: in the NO-HISTORY condition, the model only has access to the current segment and to the visual features of each of the candidate images. In the HISTORY condition, on the other hand, the model also has access to the previous segments in the reference chain associated with each of the candidate images, containing the linguistic common ground built up by the participants. We keep our models very simple. Our aim is to propose baselines against which future work can be compared.

7.1 Data
The automatically extracted co-reference chains per target image were split into three disjoint sets for training (70%), validation (15%) and testing (15%), aiming at an equal distribution of target image domains in all three sets. The raw numbers per data split are shown in Table 2.

Split   Chains   Segments   Targets   Non-Targets
Train   12,694   30,992     40,898    226,993
Val     2,811    6,801      9,070     50,383
Test    2,816    6,876      9,025     49,774

Table 2: Number of reference chains, dialogue segments, and image types (targets and non-targets) in each data split.

Figure 3: Diagram of the model in the HISTORY condition. For simplicity, we only show three candidate images. Some candidate images may not have a reference chain associated with them, while others may be linked to chains of different length, reflecting how many times an image has been referred to in the dialogue so far. In this example, the model predicts that the bottom candidate is the target referent of the segment to be resolved.

7.2 Models
Our resolution model encodes the linguistic features of the dialogue segment to be resolved with a recurrent neural network with Long Short-Term Memory (LSTM, Hochreiter and Schmidhuber, 1997). The last hidden state of the LSTM is then used as the representation for the dialogue segment. For each candidate image in the context, we obtain image features using the activations from the penultimate layer of a ResNet-152 (He et al., 2016) pre-trained on ImageNet (Deng et al., 2009). These image features, which are of size 2048, are projected onto a smaller dimension equal to the hidden dimension of the LSTM units. Projected image features go through a ReLU non-linearity and are normalised to unit vectors. To assess which of the candidate images is a target, in the NO-HISTORY condition we take the dot product between the dialogue segment representation and each image feature vector, ending up with scalar predictions for all N images in the context: s = {s1, ..., sN}.
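The NO-HISTORY scorer just described can be sketched in PyTorch as follows. This is an illustrative sketch based on the description above, not the authors' released code; the class name and dimensions are assumptions (512 hidden units and 2048-dimensional ResNet features, as stated in the paper).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoHistoryScorer(nn.Module):
    """LSTM segment encoding scored against projected, normalised image features."""

    def __init__(self, vocab_size, dim=512, img_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.img_proj = nn.Linear(img_dim, dim)

    def forward(self, segment_tokens, image_feats):
        # segment_tokens: (batch, seq_len); image_feats: (batch, n_images, 2048)
        _, (h_n, _) = self.lstm(self.embed(segment_tokens))
        seg = h_n[-1]                                                   # last hidden state: (batch, dim)
        img = F.normalize(F.relu(self.img_proj(image_feats)), dim=-1)   # ReLU + unit-norm projection
        # One dot-product score per candidate image: (batch, n_images)
        return torch.bmm(img, seg.unsqueeze(-1)).squeeze(-1)
```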
For the HISTORY condition, we propose a simple mechanism for taking into account linguistic common ground about each image. For each candidate image, we consider the sequence of previous segments within its reference chain. This shared linguistic background is encoded with another LSTM, whose last hidden state is added to the corresponding image feature that was projected to the same dimension as the hidden state of the LSTM. The resulting representation goes through ReLU, is normalised, and compared to the target dialogue segment representation via dot product, as in NO-HISTORY (see Figure 3).

As an ablation study, we train a HISTORY model without visual features. This allows us to establish a baseline performance only involving language and to study whether the HISTORY model with visual features learns an efficient multi-modal representation. We hypothesise that some descriptions can be successfully resolved by just comparing the current segment and the reference chain in the history (e.g., when descriptions are detailed and repeated). However, performance should be significantly lower than with visual features, for example when referring expressions are ambiguous.

A sigmoid is applied element-wise over the scalar predictions in all three models. As a result, each image can be assessed independently using a decision threshold (set to 0.5). This allows the model to predict multiple images as referents (as explained in Section 6, 25% of segments are linked to two targets and 3% to more; see Appendix D for further details). We use Binary Cross Entropy Loss to train the models. Since distractor images make up 84.74% of the items to be classified in the training set and target images constitute only 15.26% of them, we provided 84.74/15.26 ≈ 5.5 as the weight of the target class in the loss function. All models were implemented in PyTorch, trained with a learning rate of 0.001 and a batch size of 512. The dimension of the word embeddings and the hidden dimensions of the LSTM units were all set to 512. The parameters were optimised using Adam (Kingma and Ba, 2014). The models were trained until the validation loss stopped improving, after which we selected the model with the best weighted average of the target and non-target F-scores.
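The class-weighted binary cross-entropy described above can be written roughly as follows. This is a minimal PyTorch sketch, not the authors' code; it uses `BCEWithLogitsLoss`, which folds the sigmoid into the loss and is a common numerically stable equivalent of applying a sigmoid followed by BCE.

```python
import torch
import torch.nn as nn

# Weight the rare target class by roughly 84.74 / 15.26 ≈ 5.5, as stated above.
pos_weight = torch.tensor([84.74 / 15.26])
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

def loss_and_predictions(scores, labels, threshold=0.5):
    """scores: raw per-image predictions (batch, n_images); labels: 1 for targets, 0 for distractors."""
    loss = criterion(scores, labels.float())
    preds = (torch.sigmoid(scores) > threshold).long()  # each image is judged independently
    return loss, preds
```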
7.3 Results
We report precision, recall, and F-score for the target images in Table 3. Results for non-target images are available in Appendix E. Every candidate image contributes individually to the scores, i.e., the task is not treated as multi-label for evaluation purposes. Random baseline scores are obtained by taking the average of 10 runs with a model that predicts targets and non-targets randomly for the images in the test set. Given the low ratio of target to distractor images (see Table 2 in Section 7.1), the task of identifying target images is challenging and the random baseline achieves an F-score below 30%. The results show that the resolution capabilities of our model are well above the baseline. The HISTORY model achieves higher recall and F-score than the NO-HISTORY model, while precision is comparable across these two conditions.

Model              Precision   Recall   F1
Random baseline    15.34       49.95    23.47
NO-HISTORY         56.65       75.86    64.86
HISTORY            56.66       77.41    65.43
HISTORY/No image   35.66       63.18    45.59

Table 3: Results for the target images in the test set.

For a more in-depth analysis of the results, we examine how precision and recall vary depending on the position of the to-be-resolved segment within a reference chain. Figure 4 displays this information. As hypothesised, we observe that resolution performance is lower for later segments in a reference chain. For example, while precision is close to 60% for first mentions (position 1 in a chain), it declines by around 20 points for last mentions.

Figure 4: Precision and recall (y axis) for target images, given the position of the segment in a reference chain (x axis).

The plots in Figure 4 also show the impact of taking into account the common ground accumulated over a reference chain. This is most prominent with regard to the recall of target images. The HISTORY model yields higher results than the NO-HISTORY model when it comes to resolving segments that refer to an image that has already been referred to earlier within the dialogue (positions > 1). Yet, the presence of linguistic context does not fully cancel out the effect observed above: the performance of the HISTORY model also declines for later segments in a chain, indicating that more sophisticated methods are needed to fully exploit shared linguistic information. Experiments with the HISTORY model without visual features (HISTORY/No image) confirm our hypothesis. The HISTORY model outperforms the "blind" model by about 21 points in precision and 14 points in recall. We thus conclude that even our simple fusion mechanism already allows for learning an efficient multimodal encoding and resolution of referring expressions.

7.4 Qualitative Analysis
The quantitative dataset analysis presented in Section 5 showed that referring expressions become shorter over time, with interlocutors being most likely to retain nouns and adjectives. Qualitative inspection of the reference chains reveals that this compression process can lead to very non-standard descriptions. We hypothesise that the degree to which the compressed descriptions rely on visual information has an impact on the performance of the models. For example, the NO-HISTORY model can be effective when the participants converge on a non-standard description which highlights a visual property of the target image that clearly discriminates it from the distractors. This is the case in the example shown on the left-hand side of Figure 5. The target image shows a woman holding what seems to be a plastic carrot. This feature stands out in a domain where all the candidate images include a person and a TV (the COCO annotations for this image seem to be slightly off, as the image is tagged as including a TV but in fact shows a computer monitor). After an initial, longer description ('a woman sitting in front of a monitor with a dog wallpaper while holding a plastic carrot'), the participants use the much more compact description 'the carrot lady'. Arguably, given the saliency of the carrot in the given context, relying on the preceding linguistic history is not critical in this case, and thus both the NO-HISTORY and the HISTORY model succeed in identifying the target.

Figure 5: Reference chain for each of the two displayed images. The dialogue segments in the chains are slightly simplified for space reasons. Left: Both the HISTORY and the NO-HISTORY models succeed at identifying this image as the target of the segment to be resolved. Right: The NO-HISTORY model fails to recognise this image as the target of the segment to be resolved, while the HISTORY model succeeds. The distractor images for these two examples are available in Appendix E.
We observe that the HISTORY model is particularly helpful when the participants converge on a non-standard description of a target image that cannot easily be grounded on visual information. The image and reference chain on the right-hand side of Figure 5 illustrate this point, where the description to be resolved is the remarkably abstract 'strange'. Here the HISTORY model succeeds while the NO-HISTORY model fails. As in the previous example, the referring expression in the first segment of the reference chain for this image ('a strange bike with two visible wheels in the back') includes more descriptive content – indeed, it is similar to a caption, as shown by our analysis in Section 5.3. By exploiting shared linguistic context, the HISTORY model can not only interpret the non-standard phrase, but also recover additional properties of the image not explicit in the segment to be resolved, which presumably help to ground it.

8 Conclusion
We have presented the first large-scale dataset of goal-oriented, visually grounded dialogues for investigating shared linguistic history. Through the data collection's task setup, participants repeatedly refer to a controlled set of target images, which allows them to improve task efficiency if they utilise their developing common ground and establish conceptual pacts (Brennan and Clark, 1996) on referring expressions. The collected dialogues exhibit a significant shortening of utterances throughout a game, with final referring expressions starkly differing from both standard image captions and initial descriptions. To illustrate the potential of the dataset, we trained a baseline reference resolution model and showed that information accumulated over a reference chain helps to resolve later descriptions. Our results suggest that more sophisticated models are needed to fully exploit shared linguistic history. The current paper showcases only some of the aspects of the PhotoBook dataset, which we hereby release to the public (https://dmg-photobook.github.io). In future work, the data can be used to further investigate common ground and conceptual pacts; be extended through manual annotations for a more thorough linguistic analysis of co-reference chains; exploit the combination of vision and language to develop computational models for referring expression generation; or use the PhotoBook task in the ParlAI framework for Turing-Test-like evaluation of dialogue agents.

Acknowledgements
The PhotoBook dataset was collected thanks to funding in the form of a Facebook ParlAI Research Award to Raquel Fernández. We are grateful to the ParlAI team, in particular Jack Urbanek and Jason Weston, for their continued support and assistance in using the ParlAI framework during the data collection, which was coordinated by Janosch Haber. Janosch Haber is currently supported by the DALI project, ERC Grant 695662. We warmly thank the volunteers who took part in pilot experiments. Thanks also go to Aashish Venkatesh for his valuable contributions in brainstorming sessions about the task design and for preliminary modelling efforts. Finally, we are grateful to the participants of two Faculty Summits at Facebook AI Research NYC for their feedback.

References
Luis von Ahn, Mihir Kedia, and Manuel Blum. 2006. Verbosity: A game for collecting common-sense facts. In Proceedings of the Conference on Human Factors in Computing Systems.
Anne H.
Anderson, Miles Bader, Ellen Gurman Bard, Elizabeth Boyle, Gwyneth Doherty, Simon Garrod, Stephen Isard, Jacqueline Kowtko, Jan McAllister, Jim Miller, et al. 1991. The HCRC Map Task corpus. Language and Speech, 34(4):351–366. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In Proceedings of ICCV. Susan E. Brennan and Herbert H. Clark. 1996. Conceptual pacts and lexical choice in conversation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22:1482–1493. Sarah Brown-Schmidt, Si On Yoon, and Rachel Anna Ryskin. 2015. People as contexts in conversation. In Psychology of Learning and Motivation, volume 62, chapter 3, pages 59–99. Elsevier. Herbert H. Clark. 1996. Using Language. ’Using’ Linguistic Books. Cambridge University Press. Herbert H. Clark and Deanna Wilkes-Gibbs. 1986. Referring as a collaborative process. Cognition, 22(1):1 – 39. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e M.F. Moura, Devi Parikh, and Dhruv Batra. 2017. Visual Dialog. In Proceedings of CVPR. Harm De Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron Courville. 2017a. Guesswhat?! visual object discovery through multi-modal dialogue. In Proc. of CVPR. Harm De Vries, Florian Strub, J´er´emie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron Courville. 2017b. Modulating early visual processing by language. In Proceedings of NIPS. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A Large-Scale Hierarchical Image Database. In Proc. of CVPR. Kotaro Hara, Abigail Adams, Kristy Milland, Saiph Savage, Chris Callison-Burch, and Jeffrey P Bigham. 2018. A Data-Driven Analysis of Workers’ Earnings on Amazon Mechanical Turk. In Proceedings of the Conference on Human Factors in Computing Systems. He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. 2017. Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings. In Proceedings of ACL. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of CVPR. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Nikolai Ilinykh, Sina Zarrieß, and David Schlangen. 2018. The task matters: Comparing image captioning and task-based dialogical image description. In Proceedings of INLG. Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara L. Berg. 2014. ReferIt Game: Referring to Objects in Photographs of Natural Scenes. In Proceedings of EMNLP. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Robert M. Krauss and Sidney Weinheimer. 1964. Changes in reference phrases as a function of frequency of usage in social interaction: a preliminary study. Psychonomic Science, 1(1):113–114. Robert M. Krauss and Sidney Weinheimer. 1966. Concurrent feedback, confirmation, and the encoding of referents in verbal communication. Journal of Personality and Social Psychology, 4(3):343–346. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73. Rensis Likert. 1932. A technique for the measurement of attitudes. 
Archives of Psychology, 22 140:55–55. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C. Lawrence Zitnick. 2014. Microsoft COCO: Common Objects in Context. In Proc. of ECCV. Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. In Proceedings of CVPR. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS. Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A Dialog Research Software Platform. In Proceedings of EMNLP. Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models. In Proceedings of ICCV. Robert Stalnaker. 1978. Assertion. In P. Cole, editor, Pragmatics, volume 9 of Syntax and Semantics, pages 315–332. New York Academic Press. Tokunaga Takenobu, Iida Ryu, Terai Asuka, and Kuriyama Naoko. 2012. The REX corpora: A collection of multimodal corpora of referring expressions in collaborative problem solving dialogues. In Proceedings of LREC. Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, Devi Parikh, and Gal Chechik. 2017. Context-aware captions from context-agnostic supervision. In Proceedings of CVPR. Etsuko Yoshida. 2011. Referring expressions in English and Japanese: patterns of use in dialogue processing, volume 208. John Benjamins Publishing. Sina Zarrieß, Julian Hough, Casey Kennington, Ramesh Manuvinakurike, David DeVault, Raquel Fern´andez, and David Schlangen. 2016. PentoRef: A Corpus of Spoken References in Task-oriented Dialogues. In Proceedings of LREC. Appendices A Task Setup Image Sets The images used in the PhotoBook task are taken from the MS COCO 2014 Trainset (Lin et al., 2014). Images in MS COCO were collected from the Flickr9 image repository, which contains labelled photos predominantly uploaded by amateur photographers. The pictures largely are snapshots of everyday situations, placing objects in a natural and often rich context (hence the name Common Objects in COntext) instead of showing an iconic view of objects. In the MS COCO Trainset, images are manually annotated with the outlines of the depicted objects as well as their object categories. We use this information to select similar pictures to display in the PhotoBook task. Through the filtering described in Section 3, we obtained 30 sets of similar images with different pairings of their most prominent objects. In total there are 26 unique object categories in the image sets. The most frequent object category in the image sets is person, which is one of the two main objects in 19 sets. Specification of Games We developed a simple function to select which images of a set should be shown to which participant in which round of a game in order to guarantee that the task setup elicits sufficient image (re-)references for collecting co-reference chains. In this function, the 12 images in a set of similar photographs are randomly indexed and then assigned to a participant’s display based on the schema displayed in Table 4. 
9https://www.flickr.com/ With this schema, each photograph is displayed exactly five times while the order of images and the order of rounds can be randomised to prevent participants from detecting patterns in the display. Each of these sets then is duplicated and assigned a different selection of highlighted images to obtain the 60 game sets of the PhotoBook task. While most highlighted images recur five times during a game, they can be highlighted for both participants in the same round. As a result, any given image is highlighted in an average of 3.42 rounds of a game (see Table 5 for the highlighting schema). Round Participant A Participant B 1 1, 2, 3, 4, 5, 6 1, 2, 3, 4, 7, 8 2 1, 3, 6, 7, 9, 10 2, 3, 6, 7, 9, 11 3 4, 5, 7, 10, 11, 12 2, 3, 6, 7, 9, 11 4 1, 2, 5, 8, 11, 12 1, 4, 5, 8, 10, 12 5 5, 6, 8, 9, 11, 12 3, 7, 9, 10, 11, 12 Table 4: Assignment of image IDs to the different participants and rounds of a game schema. The order of rounds and the arrangement of images on the participant’s display can be randomised without effect on the game setup. Game Round Statistics 1 2 3 4 5 H ID A B A B A B A B A B T 1 2 R 1 1 1 1 1 1 5 5 0 3 2 2 2 2 2 2 5 0 5 4 3 1 1 1 1 1 5 5 0 3 4 2 2 2 1 2 5 4 1 3 5 1 1 1 1 1 5 0 5 4 6 2 2 2 2 2 5 5 0 4 7 1 1 1 1 1 5 0 5 3 8 2 1 2 1 1 5 2 3 3 9 2 2 1 2 2 5 4 1 3 10 2 2 2 2 2 5 5 0 4 11 1 1 1 1 1 5 0 5 4 12 2 2 2 2 2 5 5 0 3 Table 5: Schema of referent image highlighting in the PhotoBook task. The left part of the table indicates whether a given image is highlighted for one of the two participants (A and B) in a given game round in either game 1 or 2. T indicates the total count of highlights (which is 5 always), H counts the highlights per game and R the number of rounds that an image is highlighted in. B Task Instructions HIT Preview When the PhotoBook task environment is initialised, it publishes a specified number of Human Intelligence Tasks (HITs) titled Game: Detect common images by chatting with another player on Amazon Mechanical Turk (see Figure 6 for a full print of the descriptions). Participants entering the HIT are shown a preview screen with the central task details as shown in Figure 7. Game Round Mechanics The PhotoBook task AMT user interface is designed in such a way that the six images per round are displayed in a 2 × 3 grid, with a coloured bar under each image: If the image is highlighted, this bar is yellow and contains a radio button option for the common and different labels. If they are not highlighted for a player, the bar is greyed out and empty. The submit button is deactivated as long as not all highlighted images have been labelled. As soon as both players submitted their selection, a feedback page is shown where the bars under the highlighted images either colour green to indicate a correct selection or red to indicate a wrong one. Figure 8(b) shows a screenshot of the feedback display. The radio buttons are disabled in the feedback screens so players cannot revise their selection they can however communicate about their mistakes or pass any other feedback to their partner. The title of a page indicates the current page number so participants can always check their progress; the text input field is limited to a maximum of hundred characters to prevent listings of multiple images or overly elaborate descriptions which, if necessary, can be conveyed in a number of subsequent messages. 
Feedback Questionnaire
In order to facilitate a qualitative analysis of dialogue agents developed for the PhotoBook task, we also collect a gold-standard benchmark of participants' self-reported satisfaction scores. These scores can later be compared with those obtained by pairing human participants with an artificial dialogue agent in order to assess it in a Turing-Test-like setting. Following He et al. (2017), we ask participants to rate three statements on a five-point Likert scale (Likert, 1932), ranging from strongly agree to strongly disagree:
1. Overall collaboration with my partner worked well.
2. I understood my partner's descriptions well.
3. My partner seemed to understand me well.

Warming-Up Round
During an initial series of pilot runs we observed that for new participants the first round of their first game took significantly longer than any other ones. Although we do expect that participants get more efficient over time, we argue that this effect is largely related to the fact that participants need to get familiar with the task's mechanics when it is the first time they are exposed to it. In order to control for this effect, we added a warming-up round with three images per participant for each pair of new participants (see Figure 8a); warming-up image categories are disjoint from the regular PhotoBook image sets. This strongly reduced the completion time of new participants' first game rounds.

Matching Participants
In order to collect unbiased samples of the referring expression generation process, we aim to prevent both i) participants completing the same game multiple times (as here they could re-use referring expressions that worked well during the last one) and ii) specific pairs of participants completing multiple games (as they might have established some kind of strategy or code already). We however also aim at designing the task in such a way that the degree of partner-specificity in established canonical expressions can be assessed. To achieve this, the participant matching should create settings where a re-entering participant is assigned a game with the same image set as in the game before, but is paired with a different conversation partner (compare for example Brennan and Clark, 1996). In order to maximise the number of these second-game settings, we encourage workers to continue playing by paying them a bonus of 0.25 USD for each 2nd, 3rd, 4th and 5th game.

Figure 6: Screenshot of the PhotoBook task AMT HIT details shown to a participant.
Figure 7: Screenshot of the PhotoBook task AMT HIT preview page.

Worker Payment
The HIT description also details the worker's payment. We want to provide fair payment to workers, which we calculated based on an average wage of 10 USD per hour (Hara et al., 2018; see also DynamoWiki). An initial set of runs resulted in an average completion time of 12 minutes, which indicated an expected expense of about 2 USD per participant per game. More experienced workers however managed to complete a full game in six to ten minutes, meaning that for them we would often surpass the 10 USD/h guideline based on this calculation. Other workers – especially new ones – took up to 25 minutes for the first game, which means that they on the other hand would be strongly underpaid with a rigid per-game payment strategy. To mitigate this effect, we developed the following payment schema: each worker that completes a full game is paid a base amount of 1.75 USD, which is indicated in the HIT description.
If the game took longer than ten minutes, the participants are paid a bonus amount of 0.10 USD per minute, up to an additional bonus payment of 1.50 USD for 25 or more minutes. In order not to encourage workers to play slowly, we only inform them about this bonus at the end of a HIT. With this bonus and the 20% AMT fee on each transaction, we expected an average cost of about 5 USD per game, which due to connection problems in the framework ultimately accumulated to 6 USD for a completed game. The total cost of the data collection, including pilot runs, was 16,350 USD.

C Dataset Samples
Through the goal-oriented nature of participants' interactions in the PhotoBook dataset, we do not only collect image descriptions but rather the full, collaborative process of establishing, grounding and refining referring expressions throughout the subsequent rounds of the PhotoBook task. As a result, we capture a wide range of dialogue acts such as clarification questions, corrections, extensions, (self-)repairs as well as interactions concerning game mechanics. Consider for example the following interactions:

A: Man with dog on lap looking at his computer?
B: I don't have that, but could it be a TV in yours? Mine has a man sitting with his dog watching TV.
A: yes, TV - sorry!
B: Okay.

A: Do you have someone on a big motorcycle and their head isn't visible?
A: There is a blue car in the background
B: No
A: In any of the pictures?
B: No
A: Okay, thank you

Figure 8: Example screenshots for a participant's display during the warming-up round and feedback screen. (a) Screenshot of the PhotoBook task's display for one of the participants during the warming-up round. (b) Example screenshot of the PhotoBook AMT feedback display.

B: Woman with hot dog
A: Older girl with glasses holding a hot dog?
B: sitting
A: Yeah

A: Do you have a picture with a lady in a fancy dress standing by a motorcycle?
B: no
B: wait
B: yes, in black?
A: Yes, it's a black dress with white trim.
A: Is there anything else?
B: Do you have the old lady in the white hat/blue pants reading?
A: Yes, I do.
B: Okay, that's all for me

In most cases, referring expressions agreed upon during the first rounds of a game are further refined and optimised while re-referring to the same target object in later rounds of the game. These refinements often are manifested in an omission of detail while retaining core features of the target object.

A: Do you have a boy with a teal coloured shirt with yellow holding a bear with a red shirt?
B: Yes
–
B: Boy with teal shirt and bear with red shirt?
A: Yes!
–
A: Teal shirt boy?
B: No

Collecting all utterances that refer to a specific target image during a given game creates its coreference chain. Consider the following examples of first (F) and last (L) referring expressions from co-reference chains manually extracted from the PhotoBook dataset:

F: Two girls near TV playing wii One in white shirt, one in grey
L: Girls in white and grey

F: A person that looks like a monk sitting on a bench He's wearing a blue and white ball cap
L: The monk

F: A white, yellow, and blue bus being towed by a blue tow truck
L: Yellow/white bus being towed by blue

D Reference Chain Extraction
As explained in Section 6, instead of collecting co-reference chains from manual annotation, we use a heuristic to automatically extract reference chains of dialogue segments likely to contain referring expressions to a chain's target image.
We consider participants' image labelling actions to signal that a previously discussed target image was identified as either common or different, therefore concluding the current dialogue segment. Due to the spontaneous and unrestricted nature of the PhotoBook dialogues, these labelling actions however do not always indicate segment boundaries as cleanly as possible. To improve the quality of extracted dialogue segments and reference chains, we therefore developed a more context-sensitive heuristic to automate segmentation. The heuristic is implemented as a binary decision tree that uses labelling actions as well as any preceding and subsequent messages and additional labelling actions to better decide on segment boundaries and associated target images. It considers 32 combinations of eight different factors. The first case of the heuristic, for example, states that if
1. the current turn is a message,
2. the previous turn was an image labelling action,
3. the previous turn was by the other participant,
4. the next turn is an image selection action,
5. the next turn is by the current participant,
6. the next labelling action assigns a common label,
7. the other participant's previous labelling and the current participant's next labelling address the same target image, and
8. there is a non-empty, currently developing dialogue segment,
then we assume that after one speaker selected an image as common, the other speaker makes one utterance and marks the same image as common. This case is resolved by saving the currently developing segment with the common image as referent and initialising a new segment with the trailing utterance of the second speaker. This prevents creating a segment with just the trailing utterance (which cannot be a complete segment), which would be the naive decision if segmenting were based solely on labelling actions. Other cases include whether the next turn is a message by the other participant followed by them labelling a second image as different (likely to indicate that two images were discussed and the segment should be extended by the following message as well as the second target image) or whether none of the preceding and subsequent turns contains labelling actions (indicating an ongoing dialogue segment).

The following shows a typical example of an automatically extracted chain of dialogue segments associated with the image in Figure 9:

B: Hello
A: Hi
A: Do you have a woman with a black coat with buttons, glasses and a piece of pizza on table
B: no
A: Lady with black shirt, glasses with pizza on table?
B: yes
A: Table with orange bowl with lemons and liquor, cups?
B: no
A: Orange bowl with lemons, liquor?
B: lady pizza
A: No lady pizza
B: yes
B: woman and pizza
A: Empty kitchen wood coloured cabinets?
A: No woman pizza
B: no

Figure 9: Sample image MS COCO #449904.

About 72% of all segments are assigned to a single co-reference chain, 25% were automatically assigned to co-reference chains of two different target images, and the remaining 3% to three or more chains.
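To make the first of the 32 cases concrete, the eight factors above can be encoded as a simple predicate. This is an illustrative sketch; the turn representation (dicts with `type`, `speaker`, `image`, `label` keys) is a hypothetical format chosen for the example, not the released data schema.

```python
def first_case_applies(turns, i, current_segment):
    """Check the first heuristic case for the turn at index i (sketch of one of 32 cases)."""
    prev, cur = turns[i - 1], turns[i]
    nxt = turns[i + 1] if i + 1 < len(turns) else None
    return (
        cur["type"] == "message"                          # 1. current turn is a message
        and prev["type"] == "label"                       # 2. previous turn was a labelling action
        and prev["speaker"] != cur["speaker"]             # 3. ...by the other participant
        and nxt is not None and nxt["type"] == "label"    # 4. next turn is a labelling action
        and nxt["speaker"] == cur["speaker"]              # 5. ...by the current participant
        and nxt["label"] == "common"                      # 6. ...assigning a common label
        and prev["image"] == nxt["image"]                 # 7. both labellings address the same image
        and len(current_segment) > 0                      # 8. a segment is currently being developed
    )
```

When the predicate holds, the developing segment would be closed with the common image as its referent and a new segment started from the trailing utterance, as described above.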
E Reference Resolution Experiments

Data and Results
In addition to the results reported in Table 3 in Section 7, which concern the target images in the test set, here we report the scores for target images on the validation set (Table 6) and the scores for non-target images (Table 7). The latter constitute the large majority of candidate images, and thus results are substantially higher for this class.

Model        Precision   Recall   F1
NO HISTORY   56.37       75.91    64.70
HISTORY      56.32       78.10    65.45
NO IMAGE     34.61       62.49    44.55

Table 6: Results for target images in the validation set.

Model        Precision       Recall          F1
NO HISTORY   95.34 (95.37)   89.48 (89.42)   92.31 (92.30)
HISTORY      95.61 (95.76)   89.26 (89.10)   92.33 (92.31)
No image     92.24 (92.10)   79.33 (78.74)   85.30 (84.90)

Table 7: Results for non-target images in the test set (and the validation set, in brackets).

Finally, Table 8 reports the overall number of reference chains in the dataset broken down by length, that is, by the number of dialogue segments they contain.

Length (# segments) of the reference chains
Split   1      2      3      4      5      6     7
Train   1783   1340   3400   4736   1322   110   3
Val     398    295    754    1057   281    30    1
Test    400    296    754    1057   281    23    0

Table 8: Total number of reference chains per length (i.e., # segments in the chain) in each of the data splits.

Qualitative Analysis
Figures 10 and 11 show the distractor images for the examples provided in Figure 5 and discussed in Section 7.4.

Figure 10: Set of distractors for the target image and segment to be resolved on the left-hand side of Fig. 5.
Figure 11: Set of distractors for the target image and segment to be resolved on the right-hand side of Fig. 5.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1911–1922 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1911

Continual and Multi-Task Architecture Search
Ramakanth Pasunuru and Mohit Bansal
UNC Chapel Hill
{ram, mbansal}@cs.unc.edu

Abstract
Architecture search is the process of automatically learning the neural model or cell structure that best suits the given task. Recently, this approach has shown promising performance improvements (on language modeling and image classification) with reasonable training speed, using a weight sharing strategy called Efficient Neural Architecture Search (ENAS). In our work, we first introduce a novel continual architecture search (CAS) approach, so as to continually evolve the model parameters during the sequential training of several tasks, without losing performance on previously learned tasks (via block-sparsity and orthogonality constraints), thus enabling life-long learning. Next, we explore a multi-task architecture search (MAS) approach over ENAS for finding a unified, single cell structure that performs well across multiple tasks (via joint controller rewards), and hence allows more generalizable transfer of the cell structure knowledge to an unseen new task. We empirically show the effectiveness of our sequential continual learning and parallel multi-task learning based architecture search approaches on diverse sentence-pair classification tasks (GLUE) and multimodal-generation based video captioning tasks. Further, we present several ablations and analyses on the learned cell structures (all our code and models are publicly available at https://github.com/ramakanth-pasunuru/CAS-MAS).

1 Introduction
Architecture search enables automatic ways of finding the best model architecture and cell structures for the given task or dataset, as opposed to the traditional approach of manually choosing or tuning among different architecture choices, which introduces human inductive bias or is non-scalable. Recently, this idea has been successfully applied to the tasks of language modeling and image classification (Zoph and Le, 2017; Zoph et al., 2018; Cai et al., 2018; Liu et al., 2017, 2018). The first approach of architecture search involved an RNN controller which samples a model architecture and uses the validation performance of this architecture trained on the given dataset as feedback (or reward) to sample the next architecture. Some recent attempts have made architecture search more computationally feasible (Negrinho and Gordon, 2017; Baker et al., 2017) via tree-structured search space or Q-learning with an ϵ-greedy exploration, and further improvements via a weight-sharing strategy called Efficient Neural Architecture Search (ENAS) (Pham et al., 2018). In this work, we extend the architecture search approach to an important paradigm of transfer learning across multiple data sources: continual learning. The major problem in continual learning is catastrophic forgetting. For this, we introduce a novel 'continual architecture search' (CAS) approach, where the model parameters evolve and adapt when trained sequentially on a new task while maintaining the performance on the previously learned tasks. For enabling such continual learning, we formulate a two-step graph-initialization approach with conditions based on block sparsity and orthogonality.
Another scenario of transfer learning or generalization that we explore is one in which we are given multiple tasks in parallel and have to learn a single cell that is good at all these tasks, and hence allows more generalizable transfer of the cell structure knowledge to a new unseen task. This is inspired by the traditional LSTM cell's reasonable performance across a wide variety of tasks, and hence we want to automatically search (learn) a better version of such a generalizable single cell structure, via multi-task architecture search (MAS). We achieve this by giving a joint reward from multiple tasks as feedback to the controller. Hence, overall, we present two generalization approaches: CAS learns generalizable model parameters over sequential training of multiple tasks (continual learning), whereas MAS learns a generalizable cell structure which performs well across multiple tasks. For empirical evaluation of our two approaches of continual and multi-task cell learning, we choose three domains of natural language inference (NLI) bi-text classification tasks from the GLUE benchmark (Wang et al., 2018): QNLI, RTE, and WNLI, and three domains of multimodal-generation based video captioning tasks: MSR-VTT (Xu et al., 2016), MSVD (Chen and Dolan, 2011), and DiDeMo (Hendricks et al., 2017). Note that we are the first ones to use the architecture search approach for text classification tasks as well as multimodal conditioned-generation tasks, which achieves improvements on the strong GLUE and video captioning baselines. Next, for continual learning, we train the three tasks sequentially for both text classification and video captioning (through our continual architecture search method) and show that this approach tightly maintains the performance on the previously-learned domain (also verified via human evaluation), while also significantly maximizing the performance on the current domain, thus enabling life-long learning (Chen and Liu, 2016). For multi-task cell learning, we show that the cell structure learned by jointly training on the QNLI and WNLI tasks performs significantly better on the RTE dataset than the individually-learned cell structures. Similarly, we show that the cell structure learned from jointly training on the MSR-VTT and MSVD video captioning datasets performs better on the DiDeMo dataset than the individually-learned cell structures. Finally, we also present various analyses for the evolution of the learned cell structure in the continual learning approach, which preserves the properties of certain edges while creating new edges for new capabilities. For our multi-task learning approach, we observe that the joint-reward cell is relatively less complex than the individual-task cells in terms of the number of activation functions, which intuitively relates to better generalizability.

2 Related Work
Neural architecture search (NAS) has been recently introduced for automatic learning of the model structure for the given dataset/task (Zoph and Le, 2017; Zoph et al., 2018), and has shown good improvements on image classification and language modeling. NAS shares some similarity to program synthesis and inductive programming (Summers, 1986; Biermann, 1978), and it has been successfully applied to some simple Q&A tasks (Liang et al., 2010; Neelakantan et al., 2015; Andreas et al., 2016; Lake et al., 2015).
NAS was made more computationally feasible via tree-structured search space or Q-learning with an ϵ-greedy exploration strategy and experience replay (Negrinho and Gordon, 2017; Baker et al., 2017), or a weight-sharing strategy among search space parameters called Efficient Neural Architecture Search (ENAS) (Pham et al., 2018). We explore architecture search for text classification and video caption generation tasks and their integration into two transfer learning paradigms of continual learning and multi-task learning. The major problem in continual learning is catastrophic forgetting. Some approaches addressed this by adding regularization to penalize functional or shared parameters' change and learning rates (Razavian et al., 2014; Li and Hoiem, 2017; Hinton et al., 2015; Jung et al., 2016; Kirkpatrick et al., 2017; Donahue et al., 2014; Yosinski et al., 2014). Others proposed copying the previous task and augmenting with the new task's features (Rusu et al., 2016), intelligent synapses to accumulate task-related information (Zenke et al., 2017), or online variational inference (Nguyen et al., 2017). Also, Yoon et al. (2018) proposed a dynamically expandable network based on incoming new data. In our work, we introduce 'continual architecture search' by extending the NAS paradigm to avoid catastrophic forgetting via block-sparsity and orthogonality constraints, hence enabling a form of life-long learning (Chen and Liu, 2016). To the best of our knowledge, our paper is the first to extend architecture search to a continual incoming-data setup. Elsken et al. (2019) and So et al. (2019) proposed evolutionary architecture search algorithms that dynamically allocate more resources for promising architecture candidates, but these works are different from ours in that they do not consider the case where we have continual incoming data from different data sources, but instead focus on the continual evolution of the model search for efficiency purposes. Multi-task learning (MTL) is primarily used to improve the generalization performance of a task by leveraging knowledge from related tasks (Caruana, 1998; Collobert and Weston, 2008; Girshick, 2015; Luong et al., 2015; Ruder et al., 2017; Augenstein et al., 2018; Guo et al., 2018; Oh et al., 2017; Ruder and Plank, 2017). In a similar generalization spirit of multi-task learning, we present multi-task architecture learning based on performance rewards from multiple tasks, so as to find a single cell structure which can generalize well to a new unseen task.

3 Architecture Search for Text Classification and Generation
In this section, we first discuss how we adapt ENAS (Pham et al., 2018) for modeling our bi-text classification and multimodal video captioning tasks. Next, we introduce our continual and multi-task approaches of transfer learning leveraging architecture search.

3.1 ENAS Algorithm
Our initial architecture search approach is based on the recent Efficient Neural Architecture Search (ENAS) method of Pham et al. (2018), but modeled for text classification and generation-based video captioning. Fig. 1 presents the ENAS controller for sampling an RNN cell structure, which we use to learn the two encoders of our text classification model or the encoder-decoder of our video captioning model.
The controller is a simple LSTM-RNN, and the classifier encoder's or video captioning encoder-decoder's RNN cell structure is based on the combination of N nodes indexed by $h^{(t)}_1, h^{(t)}_2, \ldots, h^{(t)}_N$ (edges between nodes represent weight parameters) and activation functions (ReLU, tanh, sigmoid, identity), where $t$ denotes the time step. For node $h^{(t)}_1$, there are two inputs: $x^{(t)}$ (input signal) and $h^{(t-1)}_N$ (output from the previous time step), and the node computations are:

$c^{(t)}_1 = \mathrm{sigmoid}\big(x^{(t)} \cdot W^{(x,c)} + h^{(t-1)}_N \cdot W^{(c)}_0\big)$   (1)
$h^{(t)}_1 = c^{(t)}_1 \odot f_1\big(x^{(t)} \cdot W^{(x,h)} + h^{(t-1)}_N \cdot W^{(h)}_1\big) + (1 - c^{(t)}_1) \odot h^{(t-1)}_N$   (2)

where $f_1$ is the activation function. Node $h_l$, where $l \in \{2, 3, \ldots, N\}$, receives input from node $j_l$ where $j_l \in \{h_1, h_2, \ldots, h_{l-1}\}$, and the computation is defined as follows:

$c^{(t)}_l = \mathrm{sigmoid}\big(h^{(t)}_{j_l} \cdot W^{(c)}_{l,j_l}\big)$   (3)
$h^{(t)}_l = c^{(t)}_l \odot f_l\big(h^{(t)}_{j_l} \cdot W^{(h)}_{l,j_l}\big) + (1 - c^{(t)}_l) \odot h^{(t)}_{j_l}$   (4)

Figure 1: Architecture search models for (a) bi-text classification and (b) video caption generation tasks.

During training, we alternately train the model parameters and controller parameters. First, we sample a Directed Acyclic Graph (DAG) structure from the controller at every mini-batch and use it to update the weight parameters of the task's RNN nodes/parameters. Next, we sample a DAG from the controller and measure the (validation) performance of that structure based on this new updated state of the task model, and use this performance as a reward to allow the controller to update its own parameters. We repeat this alternate training procedure until the model converges. Later, we select the DAG structure with the best performance and use it to retrain the model from scratch.

3.2 ENAS for Bi-Text Classification
For our NLI text classification tasks, we are given the sentence pair as input, and we have to classify it as entailment or not. For a strong base model, we follow the model of Conneau et al. (2017), and use bidirectional LSTM-RNN encoders to encode both sentences and then do max-pooling on the outputs from these encoders. Let $v$ represent the max-pooling output from the first sentence encoder and $u$ represent the max-pooling output from the second sentence encoder. The joint representation $h$ is defined as $h = [u; v; |u - v|; u \odot v]$. The final representation is linearly projected to the label classes, and then fed through a softmax to get the final class distribution. Fig. 1a presents an overview of our text classification model along with the ENAS controller for sampling an RNN cell structure. We sample an RNN cell structure from the ENAS controller and use it in the two recurrent encoders of the bi-text classification model. In the first stage, we learn the best cell structure by sampling multiple cell structures and giving the corresponding validation accuracy as the feedback reward to the controller. In the second stage, we use the best cell structure from stage-1 to retrain the text classification model from scratch.
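A rough sketch of how a sampled DAG turns Eqs. (1)–(4) into a recurrent cell is given below. This is an illustrative PyTorch sketch only: the DAG encoding, class name and output convention are assumptions, and full ENAS implementations typically average the "loose end" nodes rather than returning only the last one.

```python
import torch
import torch.nn as nn

ACT = {"relu": torch.relu, "tanh": torch.tanh, "sigmoid": torch.sigmoid, "identity": lambda z: z}

class SampledENASCell(nn.Module):
    """One sampled RNN cell: dag[l] = (parent index j_l, activation name f_l)."""

    def __init__(self, dim, num_nodes, dag):
        super().__init__()
        self.dag = dag
        self.w_xc = nn.Linear(dim, dim, bias=False)   # W^{(x,c)}
        self.w_xh = nn.Linear(dim, dim, bias=False)   # W^{(x,h)}
        self.w_c = nn.ModuleList(nn.Linear(dim, dim, bias=False) for _ in range(num_nodes))
        self.w_h = nn.ModuleList(nn.Linear(dim, dim, bias=False) for _ in range(num_nodes))

    def forward(self, x, h_prev):
        # Node 1 (Eqs. 1-2): gated combination of the input and the previous output h_N^{(t-1)}.
        c1 = torch.sigmoid(self.w_xc(x) + self.w_c[0](h_prev))
        f1 = ACT[self.dag[0][1]]
        nodes = [c1 * f1(self.w_xh(x) + self.w_h[0](h_prev)) + (1 - c1) * h_prev]
        # Nodes 2..N (Eqs. 3-4): each node reads from one previously computed node j_l.
        for l in range(1, len(self.dag)):
            j, act = self.dag[l]
            h_j = nodes[j]
            c_l = torch.sigmoid(self.w_c[l](h_j))
            nodes.append(c_l * ACT[act](self.w_h[l](h_j)) + (1 - c_l) * h_j)
        return nodes[-1]  # simplified cell output; ENAS normally averages the unused ("loose end") nodes
```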
For a strong baseline, we use a sequence-to-sequence model with an attention mechanism similar to Pasunuru and Bansal (2017a), where we encode the video frames as a sequence into a bidirectional LSTM-RNN and decode the caption through another LSTM-RNN (see Fig. 1b). Our attention mechanism is similar to Bahdanau et al. (2015), where at each time step t of the decoder, the LSTM hidden state st is a non-linear function of previous time step’s decoder hidden state st−1 and generated word wt−1, and the context vector ct which is a weighted combination of the encoder hidden states {hi}. These weights αt, are defined as: αt,i = exp(et,i) Pn k=1 exp(et,k) (5) The attention function et,i = wT tanh(Wahi + Uast−1 + ba), where w, Wa, Ua, ba are learned parameters. Fig. 1b presents our video captioning model along with ENAS controller. Here, we sample an RNN cell structure from the ENAS controller and use it for both encoder and decoder, and rest of the ENAS procedure is similar to Sec. 3.2. 4 Continual Architecture Search (CAS) We introduce a novel continual learning paradigm on top of architecture search, where the RNN cell structure evolves when trained on new incoming data/domains, while maintaining the performance on previously learned data/domains (via our block-sparsity and orthogonality conditions discussed below), thus enabling life-long learning (Chen and Liu, 2016). Let θ1,k ∈θ1 and θ2,k ∈θ2 (where k denotes model parameters) be the learned model parameters for task T when independently trained on datasets d1 and d2. Then, we can say that θ2,k = θ1,k + ψ2,k, where, ψ2,k is the change in the model parameters of θ1,k when trained independently on d2. There are infinitely many possible local optimal solutions for ψ2,k, hence in our continual learning approach, we want to learn the parameters ψ2,k when training on dataset d2 such that it will not affect the performance of the task w.r.t. dataset d1. For this, we formulate two important conditions: Condition 1 When training the model on dataset d1, we constrain the model parameters θ1,k ∈ Rm×n to be sparse, specifically, to be block sparse, i.e., minimize Pm i=1 |(||θ1,k[i, :]||2)|1. Here, ||·||2 represents the l2 norm and ||·||1 represents the l1 norm. l2 and l1 norms are efficient in avoiding over-fitting; however, they are not useful for compact representation of the network. Scardapane et al. (2017) proposed group sparsity in the neural networks to completely disconnect some neurons. Our block sparse condition is inspired from their work. This sparsity condition is also useful for our continual learning approach which we discuss in Condition 2. Condition 2 When training the model on dataset d2, we start from θ1,k, keep it constant, and update ψ2,k such that: 1. ψ2,k is block sparse, i.e., minimize Pm i=1 |(||ψ2,k[i, :]||2)|1. 2. θ1,k and ψ2,k are orthogonal. It is important in the continual learning paradigm that we do not affect the previously learned knowledge. As stated in Condition 1, we find a block sparse solution θ1,k such that we find the solution θ2,k which is close to θ1,k and the new knowledge is projected in orthogonal direction via ψ2,k so that it will not affect the previously learned knowledge, and thus ‘maintain’ the performance on previously learned datasets. We constrain the closeness of θ2,k and θ1,k by constraining ψ2,k to also be block sparse (Condition 2.1). 
Also, to avoid affecting previously learned 1915 Avg dag1 Avg dag2 Dataset d1 Step-2 Step-1 Dataset d2 Avg dag3 use dag3 use dag1 Step-3 Dataset d3 Test d1 d2 d3 use dag2 Figure 2: Continual architecture search (CAS) approach: green, solid edges (weight parameters) are shared, newlylearned edges are represented with red, dashed edges. knowledge, we constrain θ1,k and ψ2,k to be orthogonal (Condition 2.2). However, strictly imposing this condition into the objective function is not feasible (Bousmalis et al., 2016), hence we add a penalizing term into the objective function as an approximation to the orthogonality condition: Lp(θ2,k) = ||θT 1,k · ψ2,k||2 2. Both Condition 2.1 and 2.2 are mutually dependent, because for two matrices’ product to be zero, they share basis vectors between them, i.e., for an n-dimensional space, there are n basis vectors and if p of those vectors are assigned to one matrix, then the rest of the n −p vectors (or subset) should be assigned to the other matrix.2 If we fill the rest of the rows with zeros, then they are block sparse, which is the reason for using Condition 2.1. Our CAS condition ablation (see Sec. 7.1) shows that both these conditions are necessary for continual learning. Next, we describe the integration of our above continual learning approach with architecture search, where the model continually evolves its cell architecture so as to perform well on the new incoming data, while also tightly maintaining the performance on previously learned data (or domains). Fig. 2 presents an overview of our continual learning integration approach into architecture search for sequential training on three datasets. Initially, given the dataset d1, we train the architecture search model to find the best Directed Acyclic Graph (DAG) structure for RNN cell and model parameters θ1,k under the block sparse condition described above in Sec. 4. We call this step-1, corresponding to dataset d1. Next, when we have a new dataset d2 from a different domain, we further continue to find the best DAG and model parameters θ2,k for best performance on d2, but initialized the parameters with step-1’s parameters θ1,k, and then trained on dataset d2 following Condition 2 (discussed in Sec. 4). We call this 2Note that it is not necessary for the matrix to contain all of the n −p basis vectors, if the matrix rank is less than n, then it may have less than n −p basis vectors. Controller Shared Model 1 2 4 3 Avg Dataset d1 Dataset d2 Dataset dn Sampled ENAS DAG Joint Reward from all datasets r1 r2 r3 Figure 3: Multi-task cell structure learning using joint rewards from n datasets. step-2, corresponding to dataset d2. After the end of step-2 training procedure, for re-evaluating the model’s performance back on dataset d1, we still use the final learned model parameters θ2,k, but with the learned DAG from step-1.3 This is because we cannot use the old step-1 model parameters θ1,k since we assume that those model parameters are not accessible now (assumption for continual learning with large incoming data streams and memory limit for saving large parameter sets). 5 Multi-Task Architecture Search (MAS) In some situations of transfer learning, we are given multiple tasks at once instead of sequentially. In such a scenario, when we train architecture search model on these multiple tasks separately, we get different cell structures on each task which overfit to that task and are not well generalizable. 
So, instead, we should learn a common cell for multiple tasks which should generalize better to an unseen task. Also, the standard non-architecture search based LSTM-RNN cell performs well across different tasks which shows enough evidence that there exist such architectures that work well across different tasks. 3For evaluating the model’s performance on dataset d2, we obviously use the final learned model parameters θ2,k, and the learned DAG from step-2. 1916 Hence, in our work, we aim to follow a datadriven route to find even better generalizable architectures that perform better than the traditional LSTM-RNN cell, via our multi-task architecture search (MAS) approach, described below. To learn a cell architecture on a task, we provide the performance of the sampled cell structure on the validation set of the given task as reward to the controller. However, our aim is to find a generalizable cell structure which jointly performs well across different tasks/datasets {d1, d2, .., dn}. Hence, during the architecture search training, the joint reward to the controller is a combination of the performance scores of the sampled cell structure on the validation set of all the available/candidate tasks, which is defined as rc = 1 n Pn i=1 ri, where reward ri comes from the validation performance on task/dataset di. Next, for fair generalizability comparison of this multi-task cell structure with other individual task-learned cell structures, we choose a new unseen task which is different from the current candidate tasks and show that the multi-task cell performs better on this unseen task than all task-related cell structures (as well as a non-ENAS LSTM cell). 6 Experimental Setup 6.1 Text Classification Datasets We choose the natural inference datasets of QNLI, RTE, and WNLI from the GLUE (Wang et al., 2018) benchmark to perform experiments for multi-task cell structure and continual architecture search. We use the standard splits provided by (Wang et al., 2018). QNLI Dataset: Question-Answering Natural Language Inference (QNLI) is extracted from the Stanford Question Answering Dataset (Rajpurkar et al., 2016), where they created sentence pair classification task by forming a pair between each question and the corresponding sentence containing the answer. Hence the task is to find whether the given sentence context contains the answer for the given question. In this dataset, we use the standard splits, i.e., 108k examples for training, 5.7k for validation, and 5.7k for testing. RTE Dataset: Recognizing Textual Entailment (RTE) is collected from a series of annual challenges on the task of textual entailment. This dataset spans the news and Wikipedia text. Here, the task is to predict whether the sentence pair is entailment or not. In this dataset, we use the standard splits, i.e., 2.5k examples for training, 276 for validation, and 3k for testing. WNLI Dataset: Winograd Natural Language Inference (WNLI) is extracted from the dataset of Winograd Schema Challenge for reading comprehension task. Original dataset is converted into a sentence pair classification task by replacing the ambiguous pronoun with each possible referent, where the task is to predict if the sentence with the substituted pronoun is entailed by the original sentence. We use 634 examples for training, 71 for validation, and 146 for testing. 
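Returning to the joint controller reward of Sec. 5, the reward combination and the resulting policy-gradient update can be sketched as follows. This is a hedged illustration: the moving-average baseline is a standard ENAS-style variance-reduction device and an assumption on our part, not a detail stated above.

```python
import torch

def joint_reward(task_rewards) -> float:
    # r_c = (1/n) * sum_i r_i : average validation score of the sampled cell
    # structure across the n candidate tasks/datasets.
    return sum(task_rewards) / len(task_rewards)

def controller_loss(sample_log_prob: torch.Tensor,
                    task_rewards,
                    baseline: float = 0.0) -> torch.Tensor:
    # REINFORCE-style objective for the controller: scale the negative
    # log-probability of the sampled DAG by the (baseline-subtracted) reward.
    return -(joint_reward(task_rewards) - baseline) * sample_log_prob

# Hypothetical usage: validation scores of one sampled cell on two tasks.
log_prob = torch.tensor(-12.3, requires_grad=True)  # log p(sampled DAG) under the controller
loss = controller_loss(log_prob, task_rewards=[0.74, 0.65], baseline=0.68)
loss.backward()
```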
6.2 Video Captioning Datasets For the conditioned-generation paradigm, we use three popular multimodal video captioning datasets: MSR-VTT, MSVD, and DiDeMo to perform experiments for continual architecture search and multi-task architecture search. MSR-VTT Dataset: MSR-VTT is a collection of 10, 000 short videos clips collected from a commercial search engine covering 41.2 hours of video and annotated through Amazon Mechanical Turk (AMT). Each video clip has 20 human annotated captions. We used the standard splits following previous work, i.e., 6, 513 video clips as training set, 497 as validation set, and 2, 990 as test set. MSVD Dataset: Microsoft Video Description Corpus (MSVD) is a collection of 1970 short video clips collected in the wild and annotated through Amazon Mechanical Turk (AMT) in different languages. In this work, we use only English language annotations. Each video clip on an average is 10 seconds in length and approximately 40 annotations. We use the standard splits following previous work, i.e., 1, 200 video clips as training set, 100 as validation set, and 670 as test set. DiDeMo Dataset: Distinct Describable Moments (DiDeMo) is traditionally a video localization task w.r.t. given description query (Hendricks et al., 2017). In this work, we use it as a video description task where given the video as input we have to generate the caption. We use the standard splits as provided by Hendricks et al. (2017). 6.3 Evaluation For GLUE tasks, we use accuracy as an evaluation metric following the previous work (Wang et al., 2018). For video captioning tasks, we report four diverse automatic evaluation metrics: METEOR (Denkowski and Lavie, 2014), 1917 CIDEr (Vedantam et al., 2015), BLEU-4 (Papineni et al., 2002), and ROUGE-L (Lin, 2004). We use the standard evaluation code (Chen et al., 2015) to obtain these scores for our generated captions w.r.t. the reference captions. 6.4 Training Details In all our experiments, our hyperparameter choices are based on validation set accuracy for GLUE tasks and an average of the four automatic evaluation metrics (METEOR, CIDEr, BLEU-4, and ROUGE-L) for video captioning tasks. We use same settings for both normal and architecture search models, unless otherwise specified. More details in appendix. 7 Results and Analysis 7.1 Continual Learning on GLUE Tasks Baseline Models: We use bidirectional LSTMRNN encoders with max-pooling (Conneau et al., 2017) as our baseline.4 Further, we used the ELMo embeddings (Peters et al., 2018) as input to the encoders, where we allowed to train the weights on each layer of ELMo to get a final representation. Table 1 shows that our baseline models achieve strong results when compared with GLUE benchmark baselines (Wang et al., 2018).5 On top of these strong baselines, we add ENAS approach. ENAS Models: Next, Table 1 shows that our ENAS models (for all three tasks QNLI, RTE, WNLI) perform better or equal than the nonarchitecture search based models.6 Note that we only replace the LSTM-RNN cell with our ENAS cell, rest of the model architecture in ENAS model is same as our baseline model.7 4We also tried various other models e.g., self-attention and cross-attention, but we found that the max-pooling approach performed best on these datasets. 5We only report single-task (and not 9-task multi-task) results from the GLUE benchmark for fair comparison to our models (even for our multi-task-cell learning experiments in Sec. 
7.3, the controller uses rewards from two datasets but the primary task is then trained only on its own data). 6On validation set, our QNLI ENAS model is statistically significantly better than the corresponding baseline with p < 0.01, and statistically equal on RTE and WNLI (where the validations sets are very small), based on the bootstrap test (Noreen, 1989; Efron and Tibshirani, 1994) with 100K samples. Since the test set is hidden, we are not able to calculate the statistical significance on it. 7Note that ENAS random search baseline vs. optimal search validation performance on QNLI, RTE, and WNLI are 73.3 (vs. 74.8), 58.8 (vs. 60.3), and 54.0 (vs. 55.6), respectively, suggesting that the learned optimal cell structure is better than the random cell structure. Models QNLI RTE WNLI PREVIOUS WORK BiLSTM+ELMo (2018) 69.4 50.1 65.1 BiLSTM+ELMo+Attn (2018) 61.1 50.3 65.1 BASELINES Baseline (with ELMo) 73.2 52.3 65.1 ENAS (Architecture Search) 74.5 52.9 65.1 CAS RESULTS CAS Step-1 (QNLI training) 73.8 N/A N/A CAS Step-2 (RTE training) 73.6 54.1 N/A CAS Step-3 (WNLI training) 73.3 54.0 64.4 Table 1: Test results on GLUE tasks for various models: Baseline, ENAS, and CAS (continual architecture search). The CAS results maintain statistical equality across each step. CAS Models: Next, we apply our continual architecture search (CAS) approach on QNLI, RTE, and WNLI, where we sequentially allow the model to learn QNLI, RTE, and WNLI (in the order of decreasing dataset size, following standard transfer setup practice) and the results are as shown in Table 1. We train on QNLI task, RTE task, and WNLI task in step-1, step-2, and step-3, respectively. We observe that even though we learn the models sequentially, we are able to maintain performance on the previously-learned QNLI task in step-2 (74.1 vs. 74.2 on validation set which is statistically equal, and 73.6 vs. 73.8 on test).8 Note that if we remove our sparsity and orthogonality conditions (Sec. 4), the step-2 QNLI performance drops from 74.1 to 69.1 on validation set, demonstrating the importance of our conditions for CAS (see next paragraph on ‘CAS Condition Ablation’ for more details). Next, we observe a similar pattern when we extend CAS to the WNLI dataset (see step-3 in Table 1), i.e, we are still able to maintain the performance on QNLI (as well as RTE now) from step-2 to step-3 (scores are statistically equal on validation set).9 Further, if we compare the performance of QNLI from step-1 to step-3, we see that they are also stat. equal on val set (73.9 vs. 74.2). This shows that our CAS method can maintain the performance of a task in a continual learning setting with several steps. CAS Condition Ablation: We also performed important ablation experiments to understand the 8Note that there is a small drop in QNLI performance for CAS Step-1 vs. ENAS (74.5 vs. 73.8); however, this is not true across all experiments, e.g., in case of RTE, CAS Step-1 is in fact better than its corresponding ENAS model (ENAS: 52.9 vs. CAS Step-1: 53.8). 9On validation set, QNLI step-3 vs. step-2 performance is 73.9 vs. 74.1, which is stat. equal. Similarly, on RTE, step3 vs. step-2 performance is 61.0 vs. 60.6 on validation set, which is again statistically equal. 1918 Model Accuracy on QNLI No Condition with RTE DAG 54.1 No Condition 69.1 Only Condition 2.1 71.5 Only Condition 2.2 69.4 Full Model (Condition 2.1 & 2.2) 74.1 Table 2: Ablation (val) results on CAS conditions. 
importance of our block sparsity and orthogonality conditions in the CAS approach (as discussed in Sec. 4). Table 2 presents the ablation results of QNLI in step-2 with CAS conditions. Our full model (with both Condition 2.1 and 2.2) achieves a validation performance of 74.1. Next, we separately experimented with each of Condition 2.1 and 2.2 and observe that using only one condition at a time is not able to maintain the performance w.r.t. step-1 QNLI performance (the decrease in score is statistically significant), suggesting that both of these two conditions are important for our CAS approach to work. Further, we remove both conditions and observe that the performance drops to 69.1. Finally, we also replaced the QNLI cell structure with the RTE cell structure along with removing both conditions and the performance further drops to 54.1. This shows that using the cell structure of the actual task is important. Time Comparison: We compare QNLI training time on a 12GB TITAN-X Nvidia GPU. Our baseline non-ENAS model takes 1.5 hours, while our CAS (and MAS) models take approximately the same training time (4 hours) as the original ENAS setup, and do not add extra time complexity. 7.2 Continual Learning on Video Captioning Baselines Models: Our baseline is a sequence-tosequence model with attention mechanism as described in Sec. 3.3. We achieve comparable results w.r.t. SotA (see Table 3), hence serving as a good starting point for the ENAS approach. ENAS Models: Table 3 also shows that our ENAS models (MSR-VTT, MSVD) perform equal/better than non-architecture search based models.10 CAS Models: Next, we apply our continual architecture search (CAS) approach on MSR-VTT and MSVD, where we sequentially allow the model to learn MSR-VTT first and then MSVD, and the results are as shown in Table 3. We observe that even though we learn the models se10Note that ENAS random search performance on MSRVTT test set is C:43.3, B:37.0, R:58.7, M:27.3, AVG: 41.6; and on MSVD test set is C:83.7, B:47.4, R:71.1, M:33.6, AVG: 59.0, suggesting that these are lower than the learned optimal cell structures’ performances shown in Table 3. quentially, we are able to maintain performance on the previously-learned MSR-VTT task in step-2, while also achieving greater-or-equal performance on the current task of MSVD in comparison with the general ENAS approach.11 Human Evaluation: We also performed human comparison of our CAS step-1 vs. step-2 via Amazon MTurk (100 anonymized test samples, Likert 1-5 scale). This gave an overall score of 3.62 for CAS step-1 model vs. 3.55 for CAS step2, which are very close (statistically insignificant with p = 0.32), again showing that CAS step-2 is able to maintain performance w.r.t. CAS step-1. 7.3 Multi-Task Cell Learning on GLUE In these experiments, we first find the best ENAS cell structures for the individual QNLI and WNLI tasks, and use these for training the RTE task. Next, we find a joint cell structure by training ENAS via joint rewards from both QNLI and WNLI datasets. Later, we use this single ‘multitask’ cell to train the RTE task, and the results are as shown in Table 4 (GLUE test results). We also include the LSTM cell and RTE-ENAS cell results for fair comparison. It is clear that the multi-task cell performs better than the single-task cells.12 This shows that a cell learned on multiple tasks is more generalizable to other tasks. 
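The statistical-equality claims in this section rest on the bootstrap test (Noreen, 1989; Efron and Tibshirani, 1994) with 100K samples. A minimal sketch of one common paired-bootstrap formulation over per-example correctness vectors follows; the authors' exact test variant may differ.

```python
import numpy as np

def paired_bootstrap_p(correct_a, correct_b, n_samples=100_000, seed=0) -> float:
    """Estimate how often system A fails to outperform system B when the test
    items are resampled with replacement; small values indicate that A > B is
    statistically significant."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(correct_a, float) - np.asarray(correct_b, float)
    n = len(diffs)
    losses = 0
    for _ in range(n_samples):
        resample = diffs[rng.integers(0, n, size=n)]
        if resample.mean() <= 0.0:
            losses += 1
    return losses / n_samples

# Hypothetical usage with 0/1 per-example correctness indicators:
# p = paired_bootstrap_p(correct_enas, correct_baseline)
```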
7.4 Multi-Task Cell on Video Captioning In these experiments, we first find the best ENAS cell structures for the individual MSR-VTT and MSVD tasks, and use these cell structures for training the DiDeMo task. Next, we find a single cell structure by training ENAS on both MSRVTT and MSVD datasets jointly. Later, we use this single cell (we call it multi-task cell) to train the DiDeMo task, and the results are as shown in Table 5. It is clear that the multi-task cell performs better than other cell structures, where the multi-task cell performance is comparable w.r.t. the DiDeMo-ENAS cell and better than the other single-task and LSTM cell structures. This shows 11MSR-VTT performance in step-1 and step-2 are stat. equal on CIDEr and ROUGE-L metrics. 12Our multi-task cell and RTE cell performance are statistically equal (61.4 vs. 60.3) and statistically better than the rest of the cells in Table 4, based on the validation set. Note that the multi-task cell does not necessarily need to be better than the RTE cell, because the latter cell will be over-optimized for its own data, while the former is a more generalized cell learned from two other datasets. 1919 Models MSR-VTT MSVD C B R M AVG C B R M AVG Baseline (Pasunuru and Bansal, 2017b) 48.2 40.8 60.7 28.1 44.5 85.8 52.5 71.2 35.0 61.1 ENAS 48.9 41.3 61.2 28.1 44.9 87.2 52.9 71.7 35.2 61.8 CAS Step-1 (MSR-VTT training) 48.9 41.1 60.5 27.5 44.5 N/A N/A N/A N/A N/A CAS Step-2 (MSVD training) 48.4 40.1 59.9 27.1 43.9 88.1 52.4 71.3 35.1 61.7 Table 3: Video captioning results with Baseline, ENAS, and CAS models. Baseline is reproduced numbers from github of Pasunuru and Bansal (2017b) which uses advanced latest visual features (ResNet-152 and ResNeXt-101) for video encoder. C, B, R, M: CIDEr, BLEU-4, ROUGE-L, and METEOR metrics. Cell Structure Performance on RTE LSTM cell 52.3 QNLI cell 52.4 WNLI cell 52.2 RTE cell 52.9 Multi-Task cell 53.9 Table 4: Comparison of MAS cell on RTE task. Cell Structure Performance on DiDeMo M C B R LSTM cell 12.7 26.7 7.6 30.6 MSR-VTT cell 12.9 25.7 7.4 30.3 MSVD cell 12.1 25.2 7.9 30.6 DiDeMO cell 13.1 27.1 7.9 30.9 Multi-Task cell 13.4 27.5 8.1 30.8 Table 5: Comparison of MAS cell on DiDeMO task. that a cell learned on multiple tasks is more generalizable to other tasks. Human Evaluation: We performed a similar human study as Sec. 7.2, and got Likert scores of 2.94 for multi-task cell vs. 2.81 for LSTM cell, which suggests that the multi-task cell is more generalizable than the standard LSTM cell. 7.5 Analysis Evolved Cell Structure with CAS Fig. 4 presents the cell structure in each step for the CAS approach, where we sequentially train QNLI, RTE, and WNLI tasks. Overall, we observe that the cell structures in CAS preserve the properties of certain edges while creating new edges for new capabilities. We notice that the cell structure in step-1 and step-2 share some common edges and activation functions (e.g., inputs to node 0) along with some new edge connections in step-2 (e.g., node 1 to node 3). Further, we observe that the step-3 cell uses some common edges w.r.t. the step-2 cell, but uses different activation functions, e.g., edge between node 0 and node 1 is the same, but the activation function is different. This shows that those edges are learning weights which are stable w.r.t. change in the activation functions. Multi-Task Cell Structure Fig. 
5 presents our multi-task MAS cell structure (with joint rewards from QNLI and WNLI), versus the RTE-ENAS x[t] identity (0) identity (1) tanh (2) tanh (3) tanh (4) ReLU (5) avg h[t] h[t-1] (a) Step-1 x[t] identity (0) tanh (1) identity (2) ReLU (3) ReLU (4) avg ReLU (5) h[t-1] h[t] (b) Step-2 x[t] identity (0) identity (1) identity (2) tanh (3) sigmoid (4) avg tanh (5) h[t] h[t-1] (c) Step-3 Figure 4: Learned cell structures for step-1, step-2, and step-3 of continual architecture search for GLUE tasks. x[t] identity (0) tanh (1) identity (2) sigmoid (3) identity (4) identity (5) avg h[t] h[t-1] (a) MAS cell x[t] identity (0) ReLU (1) tanh (2) sigmoid (3) sigmoid (4) tanh (5) avg h[t] h[t-1] (b) RTE cell Figure 5: Learned multi-task & RTE cell structures. cell structure. We observe that the MAS cell is relatively less complex, i.e., uses several identity functions and very few activation functions in its structure vs. the RTE cell. This shows that the individual-task-optimized cell structures are complex and over-specialized to that task, whereas our multi-task cell structures are simpler for generalizability to new unseen tasks. 8 Conclusion We first presented an architecture search approach for text classification and video caption generation tasks. Next, we introduced a novel paradigm of transfer learning by combining architecture search with continual learning to avoid catastrophic forgetting. We also explore multi-task cell learning for generalizability. Acknowledgments We thank the reviewers for their helpful comments. This work was supported by DARPA (YFA17-D17AP00022), and faculty awards from Google, Facebook, and Salesforce. The views contained in this article are those of the authors and not of the funding agency. 1920 References Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Learning to compose neural networks for question answering. In NAACL. Isabelle Augenstein, Sebastian Ruder, and Anders Søgaard. 2018. Multi-task learning of pairwise sequence classification tasks over disparate label spaces. In NAACL. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. 2017. Designing neural network architectures using reinforcement learning. In ICLR. Alan W Biermann. 1978. The inference of regular lisp programs from examples. IEEE transactions on Systems, Man, and Cybernetics, 8(8):585–600. Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain separation networks. In NIPS, pages 343–351. Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. 2018. Efficient architecture search by network transformation. In AAAI. Rich Caruana. 1998. Multitask learning. In Learning to learn, pages 95–133. Springer. David L Chen and William B Dolan. 2011. Collecting highly parallel data for paraphrase evaluation. In ACL. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Doll´ar, and C Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325. Zhiyuan Chen and Bing Liu. 2016. Lifelong machine learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 10(3):1–145. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. 
In Proceedings of the 25th international conference on Machine learning, pages 160–167. ACM. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In EMNLP. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In EACL. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. 2014. Decaf: A deep convolutional activation feature for generic visual recognition. In ICML, pages 647–655. Bradley Efron and Robert J Tibshirani. 1994. An introduction to the bootstrap. CRC press. Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. 2019. Efficient multi-objective neural architecture search via lamarckian evolution. In ICLR. Ross Girshick. 2015. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 1440–1448. Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2018. Soft layer-specific multi-task summarization with entailment and question generation. In ACL. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR, pages 770–778. Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 2017. Localizing moments in video with natural language. In ICCV, pages 5803–5812. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Heechul Jung, Jeongwoo Ju, Minju Jung, and Junmo Kim. 2016. Less-forgetting learning in deep neural networks. arXiv preprint arXiv:1607.00122. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526. Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. 2015. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338. Zhizhong Li and Derek Hoiem. 2017. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence. Percy Liang, Michael I Jordan, and Dan Klein. 2010. Learning programs: A hierarchical bayesian approach. In ICML, pages 639–646. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 workshop, volume 8. 1921 Chenxi Liu, Barret Zoph, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. 2017. Progressive neural architecture search. arXiv preprint arXiv:1712.00559. Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. 2018. Hierarchical representations for efficient architecture search. In CVPR. Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114. Arvind Neelakantan, Quoc V Le, and Ilya Sutskever. 2015. Neural programmer: Inducing latent programs with gradient descent. In ICLR. Renato Negrinho and Geoff Gordon. 2017. Deeparchitect: Automatically designing and training deep architectures. In CVPR. Cuong V Nguyen, Yingzhen Li, Thang D Bui, and Richard E Turner. 2017. 
Variational continual learning. arXiv preprint arXiv:1710.10628. Eric W Noreen. 1989. Computer-intensive methods for testing hypotheses. Wiley New York. Junhyuk Oh, Satinder Singh, Honglak Lee, and Pushmeet Kohli. 2017. Zero-shot task generalization with multi-task deep reinforcement learning. arXiv preprint arXiv:1706.05064. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL, pages 311–318. Ramakanth Pasunuru and Mohit Bansal. 2017a. Multitask video captioning with video and entailment generation. In ACL. Ramakanth Pasunuru and Mohit Bansal. 2017b. Reinforced video captioning with entailment rewards. In EMNLP. Mat thew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL. Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeff Dean. 2018. Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In EMNLP. Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. 2014. Cnn features offthe-shelf: an astounding baseline for recognition. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on, pages 512–519. IEEE. Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders Søgaard. 2017. Sluice networks: Learning what to share between loosely related tasks. arXiv preprint arXiv:1705.08142. Sebastian Ruder and Barbara Plank. 2017. Learning to select data for transfer learning with bayesian optimization. arXiv preprint arXiv:1707.05246. Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. 2016. Progressive neural networks. arXiv preprint arXiv:1606.04671. Simone Scardapane, Danilo Comminiello, Amir Hussain, and Aurelio Uncini. 2017. Group sparse regularization for deep neural networks. Neurocomputing, 241:81–89. David R So, Chen Liang, and Quoc V Le. 2019. The evolved transformer. arXiv preprint arXiv:1901.11117. Phillip D Summers. 1986. A methodology for lisp program construction from examples. In Readings in artificial intelligence and software engineering, pages 309–316. Elsevier. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. CIDEr: Consensus-based image description evaluation. In CVPR, pages 4566–4575. Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Saining Xie, Ross Girshick, Piotr Doll´ar, Zhuowen Tu, and Kaiming He. 2017. Aggregated residual transformations for deep neural networks. In CVPR, pages 5987–5995. IEEE. Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016. MSR-VTT: A large video description dataset for bridging video and language. In CVPR, pages 5288– 5296. IEEE. Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. 2018. Lifelong learning with dynamically expandable networks. arXiv preprint arXiv:1708.01547. Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. How transferable are features in deep neural networks? In NIPS, pages 3320–3328. Friedemann Zenke, Ben Poole, and Surya Ganguli. 2017. Continual learning through synaptic intelligence. In ICML, pages 3987–3995. 
Barret Zoph and Quoc V Le. 2017. Neural architecture search with reinforcement learning. In ICLR. Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. 2018. Learning transferable architectures for scalable image recognition. In CVPR. 1922 Appendix A Training Details We use Adam optimizer (Kingma and Ba, 2015) and a mini-batch size of 64. We set the dropout to 0.5. In all of our architecture search models, we use 6 nodes. For the controller’s optimization, we again use Adam optimizer with a learning rate of 0.00035. For GLUE tasks, we use 256 dimensions for the hidden states of the RNNs, and for word embeddings we use ELMo representations (Peters et al., 2018), where we down project the 1024 dimensions ELMo embeddings to 256. We use a learning rate of 0.001, and both encoder RNNs are unrolled to 50 steps. For CAS conditions, we set the coefficients for block-sparsity and orthogonality conditions to 0.001 and 0.001, respectively. For video captioning tasks, we use hidden state size of 1024 and word embedding size of 512. For visual features, we use a concatenation of both ResNet-152 (He et al., 2016) and ResNeXt101 (Xie et al., 2017) image features. We use a learning rate of 0.0001, and we unroll the video encoder and caption decoder to 50 and 20 steps, respectively. For CAS conditions, we set both the coefficients of block-sparsity and orthogonality conditions to 0.0001.
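As a compact summary of the settings above, the GLUE-side configuration can be written down as follows. This is a sketch only: the values are the reported hyperparameters, but the structure and names are ours and make no claim about the released code.

```python
from dataclasses import dataclass
import torch

@dataclass
class GlueCASConfig:
    batch_size: int = 64
    dropout: float = 0.5
    num_nodes: int = 6           # nodes in the sampled ENAS cell
    hidden_size: int = 256       # RNN hidden size; 1024-d ELMo is projected down to this
    model_lr: float = 1e-3
    controller_lr: float = 3.5e-4
    lambda_sparse: float = 1e-3  # block-sparsity coefficient
    lambda_ortho: float = 1e-3   # orthogonality coefficient

def build_optimizers(model: torch.nn.Module, controller: torch.nn.Module, cfg: GlueCASConfig):
    # Separate Adam optimizers for the shared model and the ENAS controller.
    return (torch.optim.Adam(model.parameters(), lr=cfg.model_lr),
            torch.optim.Adam(controller.parameters(), lr=cfg.controller_lr))
```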
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1923–1934 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1923 Semi-supervised Stochastic Multi-Domain Learning using Variational Inference Yitong Li Timothy Baldwin School of Computing and Information Systems The University of Melbourne, Australia [email protected] {tbaldwin,tcohn}@unimelb.edu.au Trevor Cohn Abstract Supervised models of NLP rely on large collections of text which closely resemble the intended testing setting. Unfortunately matching text is often not available in sufficient quantity, and moreover, within any domain of text, data is often highly heterogenous. In this paper we propose a method to distill the important domain signal as part of a multi-domain learning system, using a latent variable model in which parts of a neural model are stochastically gated based on the inferred domain. We compare the use of discrete versus continuous latent variables, operating in a domain-supervised or a domain semi-supervised setting, where the domain is known only for a subset of training inputs. We show that our model leads to substantial performance improvements over competitive benchmark domain adaptation methods, including methods using adversarial learning. 1 Introduction Text corpora are often collated from several different sources, such as news, literature, microblogs, and web crawls, raising the problem of learning NLP systems from heterogenous data, and how well such models transfer to testing settings. Learning from these corpora requires models which can generalise to different domains, a problem known as transfer learning or domain adaptation (Blitzer et al., 2007; Daum´e III, 2007; Joshi et al., 2012; Kim et al., 2016). In most stateof-the-art frameworks, the model has full knowledge of the domain of instances in the training data, and the domain is treated as a discrete indicator variable. However, in reality, data is often messy, with domain labels not always available, or providing limited information about the style and genre of text. For example, web-crawled corpora are comprised of all manner of text, such as news, marketing, blogs, novels, and recipes, however the type of each document is typically not explicitly specified. Moreover, even corpora that are labelled with a specific domain might themselves be instances of a much more specific area, e.g., “news” articles will cover politics, sports, travel, opinion, etc. Modelling these types of data accurately requires knowledge of the specific domain of each instance, as well as the domain of each test instance, which is particularly problematic for test data from previously unseen domains. A simple strategy for domain learning is to jointly learn over all the data with a single model, where the model is not conditioned on domain, and directly maximises p(y|x), where x is the text input, and y the output (e.g. classification label). Improvements reported in multi-domain learning (Daum´e III, 2007; Kim et al., 2016) have often focused on learning twin representations (shared and private representations) for each instance. The private representation is modelled by introducing a domain-specific channel conditional on the domain, and the shared one is learned through domain-general channels. 
To learn more robust domain-general and domain-specific channels, adversarial supervision can be applied in the form of either domain-conditional or domain-generative methods (Liu et al., 2016; Li et al., 2018a). Inspired by these works, we develop a method for the setting where the domain is unobserved or partially observed, which we refer to as unsupervised and semi-supervised, respectively, with respect to domain. This has the added benefit of affording robustness where the test data is drawn from an unseen domain, through modelling each test instance as a mixture of domains. In this paper, we propose methods which use latent variables to characterise the domain, by modelling the discriminative learning problem p(y|x) = R z p(z|x)p(y|x, z), where z encodes the domain, which must be marginalised 1924 out when the domain is unobserved. We propose a sequence of models of increasing complexity in the modelling of the treatment of z, ranging from a discrete mixture model, to a continuous vector-valued latent variable (analogous to a topic model; Blei et al. (2003)), modelled using Beta or Dirichlet distributions. We show how these models can be trained efficiently, using either direct gradient-based methods or variational inference (Kingma et al., 2014), for the respective model types. The variational method can be applied to domain and/or label semi-supervised settings, where not all components of the training data are fully observed. We evaluate our approach using sentiment analysis over multi-domain product review data and 7 language identification benchmarks from different domains, showing that in out-of-domain evaluation, our methods substantially improve over benchmark methods, including adversariallytrained domain adaptation (Li et al., 2018a). We show that including additional domain unlabelled data gives a substantial boost to performance, resulting in transfer models that often outperform domain-trained models, to the best of our knowledge, setting a new state of the art for the dataset. 2 Stochastic Domain Adaptation In this section, we describe our proposed approaches to Stochastic Domain Adaptation (SDA), which use latent variables to represent an implicit ‘domain’. This is formulated as a joint model of output classification label, y and latent domain z, which are both conditional on x, p(y, z|x) = pφ(z|x)pθ(y|x, z) . The two components are the prior, pφ(z|x), and classifier likelihood, pθ(y|x, z), which are parameterised by φ and θ, respectively. We propose several different choices of prior, based on the nature of z, that is, whether it is: (i) a discrete value (“DSDA”, see Section 2.2); or (ii) a continuous vector, in which case we experiment with different distributions to model p(z|x) (“CSDA”, see Section 2.3). 2.1 Stochastic Channel Gating For all of our models the likelihood, pθ(y|x, z), is formulated as a multi-channel neural model, where z is used as a gate to select which channels should be used in representing the input. The model comprises k channels, with each channel computing an independent hidden representation, hi = CNNi(x; θ)|k i=1 using a convolutional neural network.1 The value of z is then used to select the channel, by computing h = Pk i=1 zkhi, where we assume z ∈Rk is a continuous vector. For the discrete setting, we represent integer z by its 1-hot encoding z, in which case h = hz. The final step of the likelihood passes h through a MLP with a single hidden layer, followed by a softmax, which is used to predict class label y. 
2.2 Discrete Domain Identifiers We now turn to the central part of our method, the prior component. The simplest approach, DSDA (see Figure 1a), uses a discrete latent variable, i.e., z ∈[1, k] is an integer-valued random variable, and consequently the model can be considered as a form of mixture model. This prior predicts z given input x, which is modelled using a neural network with a softmax output. Given z, the process of generating y is as described above in Section 2.1. The discrete model can be trained for the maximum likelihood estimate using the objective, log p(y|x) = log k X z=1 pφ(z|x)pθ(y|x, z), (1) which can be computed tractably,2 and scales linearly in k. DSDA can be applied with supervised or semisupervised domains, by maximising the likelihood p(z = d|x) when the ground truth domain d is observed. We refer to this setting as “DSDA +sup.” or “DSDA +semisup”, respectively, noting that in this setting we assume the number of channels, k, is equal to the known inventory of domains, D. 2.3 Continuous Domain Identifiers For the DSDA model to work well requires sufficiently large k, such that all the different types of data can be clearly separated into individual mixture components. When there is not a clear delineation between domains, the inferred domain posterior is likely to be uncertain, and the approach 1Our approach is general, and could be easily combined with other methods besides CNNs. 2This arises from the finite summation in (1), which requires each of the k components to be computed separately, and their results summed. This procedure permits standard gradient back-propagation. 1925 x CNNk(θk) CNN1(θ1) · · · hk h1 · · · J h y CNN(φ) p z (a) DSDA x CNNk(θk) CNN1(θ1) · · · hk h1 · · · J h y CNN(σ) CNN(φ) p q ∼ bz y, d DKL (b) CSDA Figure 1: Model architectures for latent variable models, DSDA and CSDA, which differ in the treatment of the latent variable, which is discrete (d ∈[1, k]), or a continuous vector (ˆz ∈Rk). The lower green model components show k independent convolutional network components, and the blue and yellow component the prior, p, and the variational approximation, q, respectfully. The latent variable is used to gate the k hidden representations (shown as J), which are then used in a linear function to predict a classification label, y. During training CSDA draws samples (∼) from q, while during inference, samples are drawn from p. reduces to an ensemble technique. Thus, we introduce the second modelling approach as Continuous domain identifiers (CSDA), inspired by the way in which LDA models the documents as mixtures of several topics (Blei et al., 2003). A more statistically efficient method would be to use binary functions as domain specifiers, i.e., z ∈{0, 1}k, effectively allowing for exponentially many domain combinations (2k). Each element of the domain zi acts as a gate, or equivalently, attention, governing whether hidden state hi is incorporated into the predictive model. In this way, individual components of the model can specialise to a very specific topic such as politics or sport, and yet domains are still able to combine both to produce specialised representations, such as the politics of sport. The use of a latent bit-vector renders inference intractable, due to the marginalisation over exponentially many states. For this reason, we instead make a continuous relaxation, such that z ∈Rk with each scalar zi being drawn from a probability distribution parameterised as a function of the input x. 
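Before turning to the continuous parameterisation, the discrete objective in (1) can be sketched directly, since the finite sum over k channels is tractable. The PyTorch snippet below is illustrative only: mean-pooled embeddings and linear layers stand in for the CNN channels, the final classifier is a single shared layer, and all sizes are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DSDA(nn.Module):
    """Minimal sketch of the discrete mixture model of Sec. 2.2."""

    def __init__(self, vocab_size, emb_dim, hidden, n_channels, n_classes):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # k independent "channels" standing in for CNN_1 .. CNN_k.
        self.channels = nn.ModuleList(
            [nn.Linear(emb_dim, hidden) for _ in range(n_channels)])
        self.prior = nn.Linear(emb_dim, n_channels)  # p(z | x)
        self.clf = nn.Linear(hidden, n_classes)      # shared classifier for p(y | x, z)

    def log_likelihood(self, tokens, y):
        x = self.emb(tokens).mean(dim=1)                        # [B, emb_dim]
        log_pz = F.log_softmax(self.prior(x), dim=-1)           # [B, k]
        # Per-channel class log-probabilities, stacked over channels.
        log_py_z = torch.stack(
            [F.log_softmax(self.clf(torch.relu(ch(x))), dim=-1) for ch in self.channels],
            dim=1)                                              # [B, k, C]
        idx = y.view(-1, 1, 1).expand(-1, len(self.channels), 1)
        log_py_z = log_py_z.gather(2, idx).squeeze(2)           # [B, k]
        # Eq. (1): log p(y|x) = log sum_z p(z|x) p(y|x,z), via logsumexp.
        return torch.logsumexp(log_pz + log_py_z, dim=-1)       # [B]

model = DSDA(vocab_size=1000, emb_dim=32, hidden=64, n_channels=4, n_classes=2)
tokens = torch.randint(0, 1000, (8, 20))
y = torch.randint(0, 2, (8,))
loss = -model.log_likelihood(tokens, y).mean()
loss.backward()
```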
These functions can learn to relate aspects of x with certain domain indexes, e.g., the use of specific words like baseball and innings relate to a domain corresponding to “sport”, thereby allowing the text domains to be learned automatically. Several possible distributions can be used to model z ∈Rk. Here we consider the following distributions: Beta which bounds all elements to the range [0, 1], such that z lies in a hyper-cube; Dirichlet which also bounds all elements, as for Beta, however z are also constrained to lie in the probability simplex. In both cases,3 each dimension of z is controlled by different distribution parameters, themselves formulated as different non-linear functions of x. We expect the Dirichlet model to perform the best, based on their widespread use in topic models, and their desirable property of generating a normalised vector, resembling common attention mechanisms (Bahdanau et al., 2015). Depending on the choice of distribution, the prior is modelled as p(z|x) = Beta αααB,βββB (2a) or p(z|x) = Dirichlet α0αααD , (2b) where the prior parameters are parameterised as neural networks of the input. For the Beta prior, αααB = elu(fα,B(x)) + 1 (3a) βββB = elu(fβ,B(x)) + 1 , (3b) where elu(·) + 1 is an element-wise activation function which returns a positive value (Clevert et al., 2016), and fω(·) is a nonlinear function with parameters ω—here we use a CNN. The Dirichlet prior uses a different parameterisation, α0 = exp(fD,0(x)) (4a) αααD = sigmoid(fD(x)) , (4b) 3We also compared Gamma distributions, but they underperformed Beta and Dirichlet models. 1926 where α0 is a positive-valued overall concentration parameter, used to scale all components in (2b), thus capturing overall sparsity, while αααD models the affinity to each channel. 2.4 Variational Inference Using continuous latent variables, as described in Section 2.3, gives rise to intractable inference; for this reason we develop a variational inference method based on the variational auto-encoder (Kingma and Welling, 2014). Fitting the model involves maximising the evidence lower bound (ELBO), log pφ,θ(y|x) = log Z z pφ(z|x)pθ(y|z, x) ≥E qσ [log pθ(y|z, x)] (5) −λDKL qσ(z|x, y, d)||pφ(z|x)  , where qσ is the variational distribution, parameterised by σ, chosen to match the family of the prior (Beta or Dirichlet) and λ is a hyperparameter controlling the weight of the KL term. The ELBO in (5) is maximised with respect to σ, φ and θ, using stochastic gradient ascent, where the expectation term is approximated using a single sample, ˆz ∼qσ, which is used to compute the likelihood directly. Although it is not normally possible to backpropagate gradients through a sample, which is required to learn the variational parameters σ, this problem is usually sidestepped using a reparameterisation trick (Kingma and Welling, 2014). However this method only works for a limited range of distributions, most notably the Gaussian distribution, and for this reason we use the implicit reparameterisation gradient method (Figurnov et al., 2018), which allows for inference with a variety of continuous distributions, including Beta and Dirichlet. We give more details of the implicit reparameterisation method in Appendix A.2. The variational distribution q, is defined in an analagous way to the prior, p, see (2–4b), i.e., using a neural network parameterisation for the distribution parameters. The key difference is that q conditions not only on x but also on the target label y and domain d. 
This is done by embedding both y and d, which are concatenated with a CNN encoding of x, and then transformed into the distribution parameters. Semi-supervised learning with respect to the domain can easily be facilitated by setting d to the domain identifier when it is observed, otherwise using a sentinel value d = UNK, for domain-unsupervised instances. The same trick is used for y, to allow for vanilla semi-supervised learning (with respect to target label). The use of y and d allows the inference network to learn to encode these two key variables into z, to encourage the latent variable, and thus model channels, to be informative of both the target label and the domain. This, in concert with the KL term in (5), ensures that the prior, p, must also learn to discriminate for domain and label, based solely on the input text, x. For inference at test time, we assume that only x is available as input, and accordingly the inference network cannot be used. Instead we generate a sample from the prior ˆz ∼p(z|x), which is then used to compute the maximum likelihood label, ˆy = arg maxy p(y|x, ˆz). We also experimented with Monte Carlo methods for test inference, in order to reduce sampling variance, using: (a) prior mean ¯z = µ; (b) Monte Carlo averaging ¯y = 1 m P i p(y|x, ˆzi) using m = 100 samples from the prior; and (c) importance sampling (Glynn and Iglehart, 1989) to estimate p(y|x) based on sampling from the inference network, q.4 None of the Monte Carlo methods showed a significant difference in predictive performance versus the single sample technique, although they did show a very tiny reduction in variance over 10 runs. This is despite their being orders of magnitude slower, and therefore we use a single sample for test inference hereafter. 3 Experiments 3.1 Multi-domain Sentiment Analysis To evaluate the proposed models, we first experiment with a multi-domain sentiment analysis dataset, focusing on out-of-domain evaluation where the test domain is unknown. We derive our dataset from Multi-Domain Sentiment Dataset v2.0 (Blitzer et al., 2007).5 The task is to predict a binary sentiment label, i.e., positive vs. negative. The unprocessed dataset has more than 20 domains. For our purposes, we filter out domains with fewer than 1k labelled instances 4Importance sampling estimates p(y|x) = Eq[p(y, z|x)/q(z|x, y, d)] for each setting of y using m = 100 samples from q, and then finds the maximising y. This is tractable in our settings as y is a discrete variable, e.g., a binary sentiment, or multiclass language label. 5From https://www.cs.jhu.edu/˜mdredze/ datasets/sentiment/. 1927 Domain F(x, y, d) Y(x, y, ?) apparel 1,000 1,000 baby 950 950 camera & photo 1000 999 health & personal care 1,000 1,000 magazines 985 985 music 1,000 1,000 sports & outdoors 1,000 1,000 toys & games 1,000 1,000 video 1,000 1,000 Table 1: Numbers of instances (reviews) for each training domain in our dataset, under the two categories F (domain and label known) and Y (label known; domain unknown), in which “?” represents the “UNK” token, meaning the given attribute is unobserved. or fewer than 2k unlabelled instances, resulting in 13 domains in total. To simulate the semi-supervised domain situation, we remove the domain attributions for one half of the labelled data, denoting them as domainunlabelled data Y(x, y, ?). The other half are sentiment- and domain-labelled data F(x, y, d). 
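Before the experimental details continue, the variational training step of Sec. 2.4 can be sketched with torch.distributions, whose Dirichlet (and Beta) implementations provide the implicit-reparameterisation gradients needed for rsample. The code is a hedged illustration: linear layers replace the CNN encoders, the layer sizes are arbitrary, and λ defaults to the 0.1 value the paper reports selecting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Dirichlet, kl_divergence

class CSDAStep(nn.Module):
    """One CSDA training step with Dirichlet prior p(z|x) and posterior q(z|x,y,d)."""

    def __init__(self, feat_dim, k, n_classes, n_domains, hidden=64):
        super().__init__()
        self.channels = nn.ModuleList([nn.Linear(feat_dim, hidden) for _ in range(k)])
        self.clf = nn.Linear(hidden, n_classes)
        self.prior_alpha0 = nn.Linear(feat_dim, 1)    # overall concentration (eq. 4a)
        self.prior_alpha = nn.Linear(feat_dim, k)     # per-channel affinity (eq. 4b)
        self.y_emb = nn.Embedding(n_classes, 16)
        self.d_emb = nn.Embedding(n_domains + 1, 16)  # extra index for the UNK domain
        self.q_alpha0 = nn.Linear(feat_dim + 32, 1)
        self.q_alpha = nn.Linear(feat_dim + 32, k)

    def forward(self, x, y, d, lam=0.1):
        prior = Dirichlet(
            torch.exp(self.prior_alpha0(x)) * torch.sigmoid(self.prior_alpha(x)) + 1e-4)
        q_in = torch.cat([x, self.y_emb(y), self.d_emb(d)], dim=-1)
        q = Dirichlet(
            torch.exp(self.q_alpha0(q_in)) * torch.sigmoid(self.q_alpha(q_in)) + 1e-4)
        z = q.rsample()                                   # implicit reparameterisation
        h = torch.stack([ch(x) for ch in self.channels], dim=1)  # [B, k, hidden]
        h = (z.unsqueeze(-1) * h).sum(dim=1)              # gate the k channels with z
        log_py = F.log_softmax(self.clf(torch.relu(h)), dim=-1)
        log_py = log_py.gather(1, y.unsqueeze(1)).squeeze(1)
        kl = kl_divergence(q, prior)
        return -(log_py - lam * kl).mean()                # negative ELBO, eq. (5)

step = CSDAStep(feat_dim=32, k=8, n_classes=2, n_domains=9)
x, y, d = torch.randn(4, 32), torch.randint(0, 2, (4,)), torch.randint(0, 10, (4,))
loss = step(x, y, d)
loss.backward()
```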
We present a breakdown of the dataset in Table 1.6 For evaluation, we hold out four domains— namely books (“B”), dvds (“D”), electronics (“E”), and kitchen & housewares (“K”)—for comparability with previous work (Blitzer et al., 2007). Each domain has 1k test instances, and we split this data into dev and test with ratio 4:6. The dev dataset is used for hyper-parameter tuning and early stopping,7 and we report accuracy results on test. 3.1.1 Baselines and Comparisons For comparison, we use 3 baselines. The first is a single channel CNN (“S-CNN”), which jointly over all data instances in a single model, without domain-specific parameters. The second baseline is a multi channel CNN (“M-CNN”), which expands the capacity of the S-CNN model (606k parameters) to match CSDA and DSDA (roughly 7.5m-8.3m parameters). Our third baseline is a multi-domain learning approach using adversarial learning for domain generation (“GEN”), the bestperforming model of Li et al. (2018a) and state-ofthe-art for unsupervised multi-domain adaptation over a comparable dataset.8 We report results for 6The dataset, along with the source code, can be found at https://github.com/lrank/Code_ VariationalInference-Multidomain 7This confers light supervision in the target domain. However we would expect similar results were we to use disjoint held out domains for development wrt testing. 8The dataset used in Li et al. (2018a) differs slightly in that it is also based off Multi-Domain Sentiment Dataset v2.0, their best performing GEN +d+g model. 3.1.2 Training Strategy For the hyper-parameter setups, we provide the details in Appendix A.1. In terms of training, we simulate two scenarios using two experimental configurations, as discussed above: (a) domain supervision; and (2) domain semi-supervision. For domain supervised training, only F is used, which covers only 9 of the domains, and the test domain data is entirely unseen. For domain semisupervised training, we use combinations of F and Y, noting that both sub-corpora do not include data from the target domains, and none of which is explicitly labelled with sentiment, y, and domain, d. These simulate the setting where we have heterogenous data which includes a lot of relevant data, however its metadata is inconsistent, and thus cannot be easily modelled. For λ in (5), according to the derivation of the ELBO it should be the case that λ = 1, however other settings are often justified in practice (Alemi et al., 2018). Accordingly, we tried both annealing and fixed schedules, but found no consistent differences in end performance. We performed a grid search for the fixed value, λ = 10a, a ∈ {−3, −2, −1, 0, 1}, and selected λ = 10−1, based on development performance. We provide further analysis in the form of a sensitivity plot in Section 3.2. The latent domain size k for DSDA is set to the true number of training domains k = D = 9. Note that, even for DSDA, we could use k ̸= D, which we explore in the F +Y supervision setting in Section 3.1.3. For CSDA we present the main results with k = 13, set to match the total number of domains in training and testing. 3.1.3 Results Table 2 reports the performance of different models under two training configurations: (1) with F +Y (domain semi-supervised learning); and (2) with F only (domain supervised learning). In each case, we report the standard deviation based on 10 runs with different random seeds. Overall, domain B and D are more difficult than E and K, consistent with previous work. 
Comparing the two configurations, we see that when we use domain semi-supervised training (with the addition of Y), all models perform betbut uses slightly more training domains and a slightly different composition of training data. We retrain the model of the authors over our dataset, using their implementation. 1928 Data B D E K Average F + Y S-CNN 78.9 ± 1.3 80.9 ± 1.5 82.4 ± 0.8 84.1 ± 1.8 81.6 ± 0.9 M-CNN 79.0 ± 1.5 82.5 ± 1.3 84.1 ± 0.8 85.9 ± 0.8 82.9 ± 0.9 GEN 78.4 ± 0.9 81.2 ± 1.0 83.9 ± 1.7 87.5 ± 1.2 82.8 ± 1.1 DSDA 76.8 ± 1.4 79.6 ± 1.7 83.1 ± 1.5 85.8 ± 2.0 81.3 ± 1.0 + semi-sup. 77.1 ± 1.6 79.9 ± 1.0 83.1 ± 1.7 85.4 ± 1.3 81.4 ± 0.6 CSDA w. Beta 78.4 ± 0.8 84.4 ± 0.7 82.9 ± 1.1 87.2 ± 1.3 83.2 ± 0.9 w. Dirichlet 80.0 ± 1.4 84.3 ± 1.4 86.2 ± 1.5 87.0 ± 0.3 84.4 ± 0.9 F only S-CNN 76.0 ± 1.8 77.0 ± 1.0 81.5 ± 1.3 82.8 ± 1.6 79.3 ± 0.7 M-CNN 76.7 ± 1.8 79.2 ± 0.4 82.0 ± 1.2 83.1 ± 1.8 79.8 ± 1.3 GEN 76.7 ± 2.0 79.1 ± 1.3 82.1 ± 1.6 84.0 ± 1.1 80.5 ± 0.7 DSDA 74.3 ± 1.4 75.8 ± 2.2 80.5 ± 1.3 82.8 ± 1.4 78.4 ± 0.9 + unsup. 74.1 ± 2.0 75.6 ± 2.3 80.8 ± 1.3 83.0 ± 1.7 78.4 ± 0.6 CSDA w. Beta 78.0 ± 1.9 80.5 ± 1.1 83.7 ± 1.3 85.7 ± 1.3 82.0 ± 1.1 w. Dirichlet 77.9 ± 1.6 80.6 ± 0.9 84.4 ± 1.1 86.5 ± 0.9 82.3 ± 0.6 IN DOMAIN ♣ 80.4 82.4 84.4 87.7 83.7 Table 2: Accuracy [%] and standard deviation of different models under two data configurations: (1) using both F and Y (domain semi-supervised learning); and (2) using F only (domain supervised learning). In each case, we evaluate over the four held-out test domains (B, D, E and K), and also report the accuracy. Best results are indicated in bold in each configuration. Key: ♣from Blitzer et al. (2007). ter, demonstrating the utility of domain semisupervised learning when annotated data is limited. Comparing our discrete and continuous approaches (DSDA and DSDA, resp.), we see that CSDA consistently performs the best, outperforming the baselines by a substantial margin. In contrast DSDA is disappointing, underperforming the baselines, and moreover, shows no change in performance between domain supervision versus the semi-supervised or unsupervised settings. Among the CSDA based methods, all the distributions perform well, but the Dirichlet distribution performs the best overall, which we attribute to better modelling of the sparsity of domains, thus reducing the influence of uncertain and mixed domains. The best results are for domain semi-supervised learning (F + Y), which brings an increase in accuracy of about 2% over domain supervised learning (F) consistently across the different types of model. 3.2 Analysis and Discussion To better understand what the model learns, we focus on the CSDA model, using the Dirichlet distribution. First, we consider the model capacity, in terms of the latent domain size, k. Figure 2 shows the impact of varying k. Note that the true number of domains is D = 13, comprising 9 training and 4 test domains. Setting k to roughly this value appears to be justified, in that the mean accuracy 2 4 8 16 32 64 76 78 80 82 84 k Acc Figure 2: Performance with standard error (|||) as latent domain size k is increased in log 2 space with DSDA ( ) and with three CSDA methods using Beta ( ) and Dirichlet ( ) averaged accuracy, over F + Y. increases with k, and plateaus around k = 16. Interestingly, when k ≥32, the performance of CSDA with Beta drops, while performance for Dirichlet remains high—indeed Dirichlet is consistently superior even at the extreme value of k = 2, although it does show improvement as k increases. 
Also observe that DSDA requires a large latent state inventory, supporting our argument for the efficiency of continuous cf. discrete latent variables. Next, we consider the impact of using different combinations of F and Y. Table 3 shows the performance of difference configurations. Overall, F + Y gives excellent performance. Interestingly, 1929 Domain B D E K apparel baby camera & photo health & personal care magazines music sports & outdoors toys & games video Sentiment negative positive Figure 3: t-SNE of hidden representations in CSDA over all 13 domains, comprising 4 held-out testing domains (B, D, E and K), and the remaindering 9 domains are used only for training. Each point is a document, and the symbol indicates its gold sentiment label, using a filled circle for negative instances and cross for positive. CSDA B D E K Average F 77.9 80.6 84.4 86.5 82.3 F + Y 80.0 84.3 86.2 87.0 84.4 Y 77.6 81.5 83.7 85.2 82.0 Table 3: Accuracy [%] of CSDA w. Dirichlet trained with different configurations of F and Y. Y on its own is only a little worse than only F, showing that target labels y are more important for learning than the domain d. The Y configuration fully domain unsupervised training still results in decent performance, boding well for application to very messy and heterogenous datasets with no domain metadata. Finally, we consider what is being learned by the model, in terms of how it learns to use the k dimensional latent variables for different types of data. We visualise the learned representations, showing points for each domain plotted in a 2d tSNE plot (Maaten and Hinton, 2008) in Figure 3. Notice that each domain is split into two clusters, representing positive (×××) and negative (•) instances within that domain. Among the test domains, B (books) and D (dvds) are clustered close together but are still clearly separated, which is encouraging given the close relation between these two media. The other two, E (electronics) and K (kitchen & housewares) are mixed together and intermingled with other domains. Overall across all domains, the APPAREL cluster is quite distinct, 0.001 0.01 0.1 1 10 20 40 60 80 100 λ Acc y d Figure 4: Diagostic classifier accuracy [%] over z to predict the sentiment label y and domain label d, with respect to different λ, shown on a log scale. Dashed horizontal lines show chance accuracy for both outputs. while VIDEO and MUSIC are highly associated with D, and part of the cluster for MAGAZINES is close to B; all of these make sense intuitively, given similarities between the respective products. E is related to CAMERA and GAMES, while K is most closely connected to HEALTH and SPORTS. To obtain a better understanding of what is being encoded in the latent variable, and how this is effected by the setting of λ, we learn simple diagnostic classifiers to predict sentiment label y and domain label d, given only z as input. To do so, we first train our model over the training set, and 1930 Data EUROGOV TCL WIKIPEDIA EMEA EUROPARL TBE TSC Average Y S-CNN 98.8 92.3 85.9 98.5 92.3 79.3 91.7 91.3 M-CNN 98.9 93.6 86.2 99.2 96.0 88.3 91.7 93.4 DSDA 98.3 91.9 86.3 97.8 95.2 86.0 79.0 90.6 CSDA w. Beta 98.7 93.0 89.0 99.3 96.8 93.1 95.2 95.0 w. Dirichlet 98.9 93.0 89.0 99.2 96.7 93.2 94.5 94.9 F DSDA 98.0 91.8 85.7 97.7 95.3 85.4 78.1 90.3 CSDA w. Beta 99.3 93.7 89.1 99.2 96.9 93.6 93.9 95.1 w. 
Dirichlet 99.0 93.7 89.3 99.3 96.9 93.3 96.1 95.4 GEN 99.9 93.1 88.7 92.5 97.1 91.2 96.1 94.1 LANGID.PY 98.7 90.4 91.3 93.4 97.4 94.1 92.7 94.0 Table 4: Accuracy [%] over 7 LangID benchmarks, as well as the averaged score, for different models under two data configurations: (1) using domain unsupervised learning (Y); and (2) using domain supervised learning (F). The best results are indicated in bold in each configuration. Note that the training data for GEN and LANGID.PY is slightly different from that used in the original papers. record samples of z from the inference network. We then partition the training set, using 70% to learn linear logistic regression classifiers to predict y and d, and use the remaining 30% for evaluation. Figure 4 shows the prediction accuracy, based on averaging over three runs, each with different z samples. Clearly very small λ ≤10−2, leads to almost perfect sentiment label accuracy which is evidence of overfitting by using the latent variable to encode the response variable. For λ ≥10−1 the sentiment accuracy is still above chance, as expected, but is more stable. For the domain label d, the predictive accuracy is also above chance, albeit to a lesser extent, and shows a similar downward trend. At the setting λ = 0.1, used in the earlier experiments, this shows that the latent variable encodes captures substantial sentiment, and some domain knowledge, as observed in Figure 3. In terms of the time required for training, a single epoch of training took about 25min for the CSDA method, using the default settings, and a similar time for DSDA and M-CNN. The runtime increases sub-linearly with increasing latent size k. 3.3 Language Identification To further demonstrate our approaches, we then evaluate our models with the second task, language identification (LangID: Jauhiainen et al. (2018)). For data processing, we use 5 training sets from 5 different domains with 97 language, following the setup of Lui and Baldwin (2011). We evaluate accuracy over 7 holdout benchmarks: EUROGOV, TCL, WIKIPEDIA from Baldwin and Lui (2010), EMEA (Tiedemann, 2009), EUROPARL (Koehn, 2005), TBE (Tromp and Pechenizkiy, 2011) and TSC (Carter et al., 2013). Differently from sentiment tasks, here, we evaluate our methods using the full dataset, but with two configurations: (1) domain unsupervised, where all instance have only labels but no domain (denoted Y); and (2) domain supervised learning, where all instances have labels and domain (F). 3.3.1 Results Table 4 shows the performance of different models over 7 holdout benchmarks and the averaged scores. We also report the results of GEN, the best model from Li et al. (2018a), and one state-of-theart off-the-shelf LangID tool: LANGID.PY (Lui and Baldwin, 2012). Note that, both S-CNN and M-CNN are domain unsupervised methods. In terms of results, overall, both of our CSDA models consistently outperform all other baseline models. Comparing the different CSDA variants, Beta vs. Dirichlet, both perform closely across the LangID tasks. Furthermore, CSDA out-performs the state-of-the-art in terms of average scores. Interestingly the two training configurations show that domain knowledge F provides a small performance boost for CSDA, but not does help for DSDA. Above all, the LangID results confirm the effectiveness of our proposed approaches. 4 Related Work Domain adaptation (“DA”) typically involves one or more training domains and a single target domain. 
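Returning briefly to the diagnostic analysis of Sec. 3.2, the linear probe over sampled z can be reproduced in a few lines with scikit-learn. This is a sketch under the stated 70/30 split; the paper averages over different z samples, whereas the toy usage below simply varies the split seed, and the arrays are random stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_accuracy(z: np.ndarray, labels: np.ndarray, seed: int = 0) -> float:
    """Fit a linear probe on 70% of the sampled latent vectors z and report
    accuracy on the held-out 30%, as in the diagnostic experiment."""
    z_tr, z_te, y_tr, y_te = train_test_split(
        z, labels, test_size=0.3, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(z_tr, y_tr)
    return clf.score(z_te, y_te)

# Hypothetical usage with random stand-ins for sampled z and gold labels:
z = np.random.rand(500, 13)              # one sampled z per training instance
sentiment = np.random.randint(0, 2, 500)
domain = np.random.randint(0, 9, 500)
acc_y = np.mean([probe_accuracy(z, sentiment, seed=s) for s in range(3)])
acc_d = np.mean([probe_accuracy(z, domain, seed=s) for s in range(3)])
```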
Among DA approaches, single-domain adaptation is the most common scenario, where a model is trained over one domain and then transferred to a single target domain using prior 1931 knowledge of the target domain (Blitzer et al., 2007; Glorot et al., 2011). Adversarial learning methods have been proposed for learning robust domain-independent representations, which can capture domain knowledge through semisupervised learning (Ganin et al., 2016). Multi-domain adaptation uses training data from more than one training domain. Approaches include feature augmentation methods (Daum´e III, 2007), and analagous neural models (Joshi et al., 2012; Kim et al., 2016), as well as attentionbased and hierarchical methods (Li et al., 2018b). These works assume the ‘oracle’ source domain is known when transferring, however we do not require an oracle in this paper. Adversarial training methods have been employed to learn robust domain-generalised representations (Liu et al., 2016). Li et al. (2018a) considered the case of the model having no access to the target domain, and using adversarial learning to generate domaingeneration representations by cross-comparison between source domains. The other important component of this work is Variational Inference (“VI”), a method from machine learning that approximates probability densities through optimisation (Blei et al., 2017; Kucukelbir et al., 2017). The idea of a variational auto-encoder has been applied to language generation (Bowman et al., 2016; Kim et al., 2018; Miao et al., 2017; Zhou and Neubig, 2017; Zhang et al., 2016) and machine translation (Shah and Barber, 2018; Eikema and Aziz, 2018), but not in the context of semi-supervised domain adaptation. 5 Conclusion In this paper, we have proposed two models— DSDA and CSDA—for multi-domain learning, which use a graphical model with a latent variable to represent the domain. We propose models with a discrete latent variable, and a continuous vectorvalued latent variable, which we model with Beta or Dirichlet priors. For training, we adopt a variational inference technique based on the variational autoencoder. In empirical evaluation over a multi-domain sentiment dataset and seven language identification benchmarks, our models outperform strong baselines, across varying data conditions, including a setting where no target domain data is provided. Our proposed models have broad utility across NLP applications on heterogenous corpora. Acknowledgements This work was supported by an Amazon Research Award. We thank the anonymous reviewers for their helpful feedback and suggestions. References Alexander Alemi, Ben Poole, Ian Fischer, Joshua Dillon, Rif A Saurous, and Kevin Murphy. 2018. Fixing a broken elbo. In International Conference on Machine Learning, pages 159–168. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations. Timothy Baldwin and Marco Lui. 2010. Language identification: The long and the short of the matter. In Proceedings of Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, pages 229–237. David M Blei, Alp Kucukelbir, and Jon D McAuliffe. 2017. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet allocation. 
Journal of Machine Learning Research, 3(Jan):993–1022. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 440–447. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21. Simon Carter, Wouter Weerkamp, and Manos Tsagkias. 2013. Microblog language identification: Overcoming the limitations of short, unedited and idiomatic text. Language Resources and Evaluation, 47(1):195–215. Djork-Arn´e Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2016. Fast and accurate deep network learning by exponential linear units (ELUs). In Proceedings of the International Conference on Learning Representations. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 256–263. 1932 Bryan Eikema and Wilker Aziz. 2018. Auto-encoding variational neural machine translation. arXiv preprint arXiv:1807.10564. Mikhail Figurnov, Shakir Mohamed, and Andriy Mnih. 2018. Implicit reparameterization gradients. In Advances in Neural Information Processing Systems 31, pages 439–450. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17:59:1–59:35. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th International Conference on Machine Learning, pages 513–520. Peter W Glynn and Donald L Iglehart. 1989. Importance sampling for stochastic simulations. Management Science, 35(11):1367–1392. Tommi Jauhiainen, Marco Lui, Marcos Zampieri, Timothy Baldwin, and Krister Lind´en. 2018. Automatic language identification in texts: A survey. CoRR, abs/1804.08186. Mahesh Joshi, Mark Dredze, William W. Cohen, and Carolyn Penstein Ros´e. 2012. Multi-domain learning: When do domains matter? In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1302–1312. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1746–1751. Yoon Kim, Sam Wiseman, Andrew Miller, David Sontag, and Alexander Rush. 2018. Semi-amortized variational autoencoders. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2678–2687. Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2016. Frustratingly easy neural domain adaptation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 387–396. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations. Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. 2014. Semi-supervised learning with deep generative models. 
In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, pages 3581–3589. Diederik P Kingma and Max Welling. 2014. Autoencoding variational Bayes. In Proceedings of the International Conference on Learning Representations. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit 2005, pages 79–86. Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M Blei. 2017. Automatic differentiation variational inference. Journal of Machine Learning Research, 18(1):430–474. Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018a. What’s in a domain? learning domain-robust text representations using adversarial training. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 474–479. Zheng Li, Ying Wei, Yu Zhang, and Qiang Yang. 2018b. Hierarchical attention transfer network for cross-domain sentiment classification. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Deep multi-task learning with shared memory for text classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 118–127. Marco Lui and Timothy Baldwin. 2011. Cross-domain feature selection for language identification. In Fifth International Joint Conference on Natural Language Processing, pages 553–561. Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In Proceedings of ACL 2012 System Demonstrations, pages 25–30. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605. Yishu Miao, Edward Grefenstette, and Phil Blunsom. 2017. Discovering discrete latent topics with neural variational inference. In Proceedings of the 34th International Conference on Machine LearningVolume 70, pages 2410–2419. Harshil Shah and David Barber. 2018. Generative neural machine translation. In Advances in Neural Information Processing Systems, pages 1346–1355. J¨org Tiedemann. 2009. News from OPUS – a collection of multilingual parallel corpora with tools and interfaces. In Recent Advances in Natural Language Processing, volume 5, pages 237–248. Erik Tromp and Mykola Pechenizkiy. 2011. Graphbased n-gram language identification on short texts. 1933 In Proceedings of the 20th Machine Learning Conference of Belgium and The Netherlands, pages 27– 34. Biao Zhang, Deyi Xiong, Jinsong Su, Hong Duan, and Min Zhang. 2016. Variational neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 521–530. Chunting Zhou and Graham Neubig. 2017. Multispace variational encoder-decoders for semisupervised labeled sequence transduction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 310–320. 1934 A Appendices A.1 Base Model Architecture For the sentiment task, all the hidden representations are learned by convolutional neural networks (CNN), following Kim (2014). All documents are lower-cased and truncated to maximum 256 tokens, and then each word is mapped into a 300 dimensional vector representation using randomlyinitialised word embeddings. In each CNN channel, filter windows are set to {3, 4, 5}, with 128 filters for each. 
Then, ReLU and pooling are applied after the filtering, generating 384-d (128 ∗3) hidden representations. Dropout is applied to the hidden h, at a rate of 0.5. For simplicity, we use the same CNN architecture to encode the functions f used in the prior q and in the inference networks p, in each case with different parameters. Specifically, in prior q, the embedding sizes of domain and label are set to 16 and 4, respectively. ααα and βββ share the same CNN but with different output projections. After gating using z, the final hidden goes through a one-hidden MLPwith hidden size 300. We use the Adam optimiser (Kingma and Ba, 2015) throughout, with the learning rate set to 10−4 and a batch size of 32, optimising the loss functions (1) or (5), for DSDA and CSDA, respectively. For the language identification task, all documents are tokenized as a byte sequence, truncated or padded to a length of 1k bytes. We use the same CNN architecture and hyper-parameter configurations as for the sentiment task. A.2 Implicit Reparameterisation Gradient In this section, we outline the implicit reparameterisation gradient method of Figurnov et al. (2018). First, we review some background on variational inference. We start by defining a differentiable and invertible standardization function as Sσ(z) = ϵ ∼q(ϵ) , (6a) which describes a mapping between points drawn from a specific distribution function and a standard distribution, q. For example, for a Gaussian distribution z ∼N(µ, ψ), we can define Sµ,ψ(z) = (z −µ)/ψ ∼N(0, 1) to map to the standard Normal. We aim to compute the gradient of the expectation of a objective function f(z), ∇σ E qσ(z) [f(z)] = E q(ϵ) [∇σf(S−1(ϵ))] , (6b) where in ELBO (5) in our case, f(z) = pθ(y|z, x) is the likelihood function. The implicit reparameterisation gradient technique is a way of computing the reparameterisation without the need for inversion of the standardization function. This works by applying ∇σS−1(ϵ) = ∇σz, ∇σ E qσ(z) [f(z)] = E qσ(z) [∇zf(z)∇σz] . (6c) However, we still need to calculate ∇σz. The key insight here is that we can compute ∇σz by implicit differentiation. We apply the total gradient ∇TD σ over (6a), ∇TD σ Sσ(z) = ∇TD σ ϵ . (6d) From the definition of a standardization function, the noise ϵ is independent of σ, and we apply the multi-variable chain rule over left side of (6d), ∂Sσ(z) ∂z ∇σz + ∂Sσ(z) ∂σ = 0 . (6e) Therefore, the key of the implicit gradient calculation in this process can be summarised as ∇σz = −(∇zSσ(z))−1∇σSσ(z) . (6f) This expression allows for computation of (6c), which can be applied to a range of distribution families. We refer the reader to Figurnov et al. (2018) for further details.
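To make the implicit reparameterisation identity (6f) concrete, the short check below instantiates it for the Gaussian case, where the standardization function is available in closed form and the resulting gradient can be compared against the familiar explicit reparameterisation z = μ + ψε (which gives ∂z/∂ψ = ε). This is an illustrative sketch only; the parameter values and the use of NumPy are arbitrary choices and not part of the paper's implementation.

```python
import numpy as np

# Gaussian example: z ~ N(mu, psi), standardization S_{mu,psi}(z) = (z - mu) / psi.
mu, psi = 0.5, 2.0
rng = np.random.default_rng(0)
eps = rng.standard_normal()          # standard noise, independent of (mu, psi)
z = mu + psi * eps                   # a sample with S_{mu,psi}(z) = eps

# Implicit reparameterisation gradient (6f):
#   dz/dpsi = -(dS/dz)^{-1} * dS/dpsi
dS_dz = 1.0 / psi                    # derivative of (z - mu)/psi w.r.t. z
dS_dpsi = -(z - mu) / psi ** 2       # derivative of (z - mu)/psi w.r.t. psi
dz_dpsi_implicit = -dS_dpsi / dS_dz

# Explicit reparameterisation for comparison: z = mu + psi*eps  =>  dz/dpsi = eps.
print(dz_dpsi_implicit, eps)         # the two values coincide
assert np.isclose(dz_dpsi_implicit, eps)
```

For Beta and Dirichlet latents the standardization function cannot be inverted in closed form, which is exactly where the implicit form is needed; reparameterised samples of this kind are exposed, for example, by torch.distributions.Dirichlet(...).rsample(), although the paper does not specify implementation details beyond Figurnov et al. (2018).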
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1935–1945 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1935 Boosting Entity Linking Performance by Leveraging Unlabeled Documents Phong Le1 and Ivan Titov1,2 1University of Edinburgh 2University of Amsterdam [email protected] [email protected] Abstract Modern entity linking systems rely on large collections of documents specifically annotated for the task (e.g., AIDA CoNLL). In contrast, we propose an approach which exploits only naturally occurring information: unlabeled documents and Wikipedia. Our approach consists of two stages. First, we construct a high recall list of candidate entities for each mention in an unlabeled document. Second, we use the candidate lists as weak supervision to constrain our document-level entity linking model. The model treats entities as latent variables and, when estimated on a collection of unlabelled texts, learns to choose entities relying both on local context of each mention and on coherence with other entities in the document. The resulting approach rivals fully-supervised state-of-the-art systems on standard test sets. It also approaches their performance in the very challenging setting: when tested on a test set sampled from the data used to estimate the supervised systems. By comparing to Wikipedia-only training of our model, we demonstrate that modeling unlabeled documents is beneficial. 1 Introduction Named entity linking is the task of linking a mention to the corresponding entity in a knowledge base (e.g., Wikipedia). For instance, in Figure 1 we link mention “Trump” to Wikipedia entity Donald Trump. Entity linking enables aggregation of information across multiple mentions of the same entity which is crucial in many natural language processing applications such as question answering (Hoffmann et al., 2011; Welbl et al., 2018), information extraction (Hoffmann et al., 2011) or multi-document summarization (Nenkova, 2008). While traditionally entity linkers relied mostly on Wikipedia and heuristics (Milne and Witten, Mr. Trump discussed Brexit with Mrs. May . Donald_Trump (*) Donald_Trump_Jr. Melania_Trump Ivanka_Trump Trump_(card_games) Trump_(surname) Trump_(video_gamer) Trump_(magazine) Trump,_Colorado ... Brexit(*) May_(singer) May_(surname) Theresa_May (*) Mary_of_Teck Abby_May Cyril_May Fiona_May May_(film) May,_California ... Figure 1: A sentence with candidate entities for mentions. The correct entities are marked with (*). We automatically extract likely candidates (red bold) and likely negative examples (non-bold red). These are used to train our weakly-supervised model. 2008; Ratinov et al., 2011a; Cheng and Roth, 2013), the recent generation of methods (Globerson et al., 2016; Guo and Barbosa, 2016; Yamada et al., 2016; Ganea and Hofmann, 2017; Le and Titov, 2018) approached the task as supervised learning on a collection of documents specifically annotated for the entity linking problem (e.g., relying on AIDA CoNLL (Hoffart et al., 2011)). While they substantially outperform the traditional methods, such human-annotated resources are scarce (e.g., available mostly for English) and expensive to create. Moreover, the resulting models end up being domain-specific: their performance drops substantially when they are used in a new domain.1 We will refer to these systems as fully-supervised. 
Our goal is to show that an accurate entity linker can be created relying solely on naturally occurring data. Specifically, our approach relies only on Wikipedia and a collection of unlabeled texts. Though links in Wikipedia have been created by humans, no extra annotation is necessary to build our linker. Wikipedia is also available in many 1The best reported in-domain scores are 93.1% F1 (Le and Titov, 2018), whereas the best previous out-of-domain score is only 85.7% F1 (Guo and Barbosa, 2016) (an average over 5 standard out-of-domain test sets, Table 1). 1936 languages and covers many domains. Though Wikipedia information is often used within entity linking pipelines, previous systems relying on Wikipedia are substantially less accurate than modern fully-supervised systems (e.g., Cheng and Roth (2013), Ratinov at al. (2011a)). This is also true of the only other method which, like ours, uses a combination of Wikipedia data and unlabeled texts (Lazic et al., 2015). We will refer to approaches using this form of supervision, including our approach, as Wikipedia-based linkers. Wikipedia articles have a specific rigid structure (Chen et al., 2009), often dictated by the corresponding templates, and mentions in them are only linked once (when first mentioned). For these reasons, Wikipedia pages were not regarded as suitable for training document-level models (Globerson et al., 2016; Ganea and Hofmann, 2017), whereas state-of-the-art fully supervised methods rely on document-level modeling. We will show that, by exploiting unlabeled documents and estimating document-level neural coherence models on these documents, we can bring Wikipedia-based linkers on par or, in certain cases, make them more accurate than fully-supervised linkers. Our Wikipedia-based approach uses two stages: candidate generation and document-level disambiguation. First, we take an unlabeled document collection and use link statistics in Wikipedia to construct a high recall list of candidates for each mention in each document. To create these lists, we use the Wikipedia link graph, restrict vertices to the ones potentially appearing in the document (i.e. use the ‘vertex-induced subgraph’ corresponding to the document) and perform message passing with a simple probabilistic model which does not have any trainable parameters. After this step, for the example in Figure 1, we would be left with Theresa May and a Queen of England Mary of Teck as two potential candidates for mention “May,” whereas we would rule out many other possibilities (e.g., a former settlement in California). Second, we train a document-level statistical disambiguation model which treats entities as latent variables and uses the candidate lists as weak supervision. Intuitively, the disambiguation model is trained to score at least one assignment compatible with the candidate lists higher than all the assignments incompatible with the lists (e.g., one which links “Trump” to Ivanka Trump). Though the constraints do not prevent linking “May” to the Queen in Figure 1, given enough data, the model should rule out this assignment as not in fitting with other entities in the document (i.e. Donald Trump and Brexit) and/or not compatible with its local context (i.e. “Mrs.”). We evaluate our model against previous methods on six standard test sets, covering multiple domains. Our model achieves the best results on four of these sets and in average. 
Interestingly, our system performs well on test data from AIDA CoNLL, the dataset used to train fully-supervised systems, even though we have not used the annotations. Our approach also substantially outperforms both previous Wikipedia-based approaches and a version of our system which is simply trained to predict Wikipedia links. This result demonstrates that unlabeled data was genuinely beneficial. We perform ablations confirming that the disambiguation model benefits from capturing both coherence with other entities (e.g., Theresa May is more likely than Mary of Teck to appear in a document mentioning Donald Trump) and from exploiting local context of mentions (e.g., “Mrs.” can be used to address a prime minister but not a queen). This experiment confirms an intuition that global modeling of unlabeled documents is preferable to training local models to predict individual Wikipedia links. Our contributions can be summarized as follows: • we show how Wikipedia and unlabeled data can be used to construct an accurate linker which rivals linkers constructed using expensive human supervision; • we introduce a novel constraint-driven approach to learning a document-level (‘global’) co-reference model without using any document-level annotation; • we provide evidence that fully-annotated documents may not be as beneficial as previously believed. 2 Constraint-Driven Learning for Linking 2.1 Setting We assume that for each mention mi, we are provided with a set of candidates E+ i . In subsequent section we will clarify how these 1937 candidates are produced. For example, for m1 =“Trump” in Figure 1, the set would be E+ 1 = {Donald Trump, Melania Trump}. When learning our model we will assume that one entity candidate in this set is correct (e∗ i ). Besides the ‘positive examples’ E+ i , we assume that we are given a set of wrong entities E− i (including, in our example, Ivanka Trump and Donald Trump Jr). In practice our candidate selection procedure is not perfect and the correct entity e∗ i will occasionally be missed from E+ i and even misplaced into E− i . This is different from the standard supervised setting where E+ i contains a single entity, and the annotation is not noisy. Moreover, unlike the supervised scenario, we do not aim to learn to mimic the teacher but rather want to improve on it relying on other learning signals (i.e. document context). Some mentions do not refer to any entity in a knowledge base and should, in principle, be left unlinked. In this work, we link mentions whenever there are any candidates for linking them. More sophisticated ways of dealing with NIL-linking are left for future work. 2.2 Model Our goal is to not only model fit between an entity and its local context but also model interactions between entities in a document (i.e. coherence between them). As in previous global entity-linking models (Ratinov et al., 2011a), we can define the scoring function for n entities e1, . . . , en in a document D as a conditional random field: g(e1, . . . , en|D) = n X i=1 φ(ei|D)+ X j̸=i ψ(ei, ej|D), where the first term scores how well an entity fits the context and the second one judges coherence. Exact MAP (or max marginal) inference, needed both at training and testing time, is NP-hard (Wainwright et al., 2008), and even approximate methods (e.g., loopy belief propagation, LBP) are relatively expensive and do not provide convergence guarantees. Instead, we score entities independently relying on the candidate lists: s(ei|D) = φ(ei|D)+ X j̸=i max ej∈E+ j ψ(ei, ej|D). 
(1) Informally, we score ei based on its coherence with the ‘most compatible’ candidate for each mention in the document. This scoring strategy Mr. Trump discussed Brexit with Mrs. May tanh, dropout Figure 2: h(mi, ci) is a one-layer neural network, with tanh activation and a layer of dropout on top. is computationally efficient and has been shown effective in the supervised setting by Globerson et al. (2016). They refereed to this approach as a ‘star model’, as it can be regarded as exact inference in a modified graphical model.2 We instantiate the general model for the above expression (1) in the following form: s(ei|D) = φ(ei|ci, mi) + X j̸=i αij max ej∈E+ j ξ(ei, ej), where we use mi to denote an entity mention, ci is its context (a text window around the mention), ξ(ei, ej) is a pair-wise compatibility score and αij are attention weights, measuring relevance of an entity at position j to predicting entity ei (i.e. Pn j=1 αij = 1). The local score φ is identical to the one used in Ganea and Hofmann (2017). As the pair-wise compatibility score we use ξ(ei, ej) = xT eiRxej, where xei and xej ∈ Rde are external entity embeddings, which are not fine-tuned in training. R ∈Rde×de is a diagonal matrix. The attention is computed as αij ∝exp n h(mi, ci)T Ah(mj, cj)/ p dc o where the function h(mi, ci) mapping a mention and its context to Rdc is given in Figure 2, A ∈ Rdc×dc is a diagonal matrix. A similar attention model was used in the supervised linkers of Le and Titov (2018) and Globerson et al. (2016). Previous supervised methods such as Ganea and Hofmann (2017) additionally exploited a simple extra feature pwiki(ei|mi): the normalized frequency of mention mi being used as an anchor text for entity ei in Wikipedia articles and YAGO. We combine this score with the model score s(ei|D) using a one-layer neural network to yield ˆs(ei|D). At test time, we use our model to select entities from the candidate list. As standard in reranking (Collins and Koo, 2005), we linearly combine 2For each ei, you create its own graphical model: keep only edges connecting ei to all other entities; what you obtain is a star-shaped graph with ei at its center. 1938 ˆs(ei|D) with the score sc(ei|D) from the candidate generator, defined below (Section 3.3).3 The hyper-parameters are chosen using a development set. Additional details are provided in the appendix. 2.3 Training As we do not know which candidate in E+ i is correct, we train the model to score at least one candidate in E+ i higher than any negative example from E− i . This approach is reminiscent of constraintdriven learning (Chang et al., 2007), as well as of multi-instance learning methods common in relation extraction (Riedel et al., 2010; Surdeanu et al., 2012). Specifically, we minimize L(Θ) = X D X mi h δ + max e− i ∈E− i ˆs(e− i |D) −max e+ i ∈E+ i ˆs(e+ i |D) i + where Θ is the set of model parameters, δ is a margin, and [x]+ = max{0, x}. 3 Producing Weak Supervision We rely primarily on Wikipedia to produce weak supervision. We start with a set of candidates for a mention m containing all entities refereed to with anchor text m in Wikipedia. We then filter this set in two steps. The first step is the preprocessing technique of Ganea and Hofmann (2017). After this step, the list has to remain fairly large in order to maintain high recall. Large lists are not effective as weak supervision as they do not sufficiently constraint the space of potential assignments to drive learning of the entity disambiguation model. 
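The per-mention term of the training objective in Section 2.3 makes this concrete: the larger and noisier E+ is, the easier it becomes to satisfy the max over positives, so the constraint carries less learning signal. Below is a minimal sketch of that hinge term, assuming the scores are produced by the (given) scoring function ŝ; the margin δ = 0.1 follows the value reported in the experiments, and the example scores are made up.

```python
import torch

def weak_supervision_loss(scores_pos, scores_neg, delta=0.1):
    """Hinge loss for one mention (Section 2.3): at least one candidate in E+
    must out-score every candidate in E- by a margin delta.

    scores_pos: tensor of s_hat(e|D) for candidates in E+_i
    scores_neg: tensor of s_hat(e|D) for candidates in E-_i
    """
    return torch.clamp(delta + scores_neg.max() - scores_pos.max(), min=0.0)

# Tiny example: two positives, three negatives (scores are illustrative only).
print(weak_supervision_loss(torch.tensor([0.7, 0.2]), torch.tensor([0.5, 0.1, 0.3])))
# -> 0.0, since the best positive beats the best negative by at least the margin
```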
In order to further reduce the list, we apply the second filtering step. In this stage, which we introduce in this work, we use Wikipedia to create a link graph: entities as vertices in this graph. The graph defines the structure of a probabilistic graphical model which we use to rerank the candidate list. We select only top candidates for each mention (2 in our experiments) and still maintain high recall. The two steps are described below. 3.1 Initial filtering For completeness, we re-describe the filtering technique of Ganea and Hofmann (2017). The 3We do not train the linear coefficient in an end-to-end fashion, as we do not want our model to over-rely on the candidate selection procedure at training time. Brexit United_ Kingdom European_ Union Theresa_May Greek_withdrawal_ from_the_eurozone Brexit Brexit is the prospective withdrawal of the United Kingdom (UK) from the European Union (EU). ... Prime Minister Theresa May announced that the UK would not seek permanent membership of the single market ... … Brexit is a portmanteau of "British" and "exit". It was derived by analogy from Grexit. ... Figure 3: A Wikipedia article and the corresponding subgraph of the Wikipedia link graph. initial list of candidates is large (see Ganea and Hofmann (2016), Table 1 for statistics), though there are some mentions (e.g., “Brexit” in Figure 1) which are not ambiguous. In order to filter this list, besides pwiki(e|m), Ganea and Hofmann (2017) use a simple model measuring similarity in the embedding space between an entity and words within the mention span m and a window c around it qwiki(e|m, c) ∝exp{xT e X w∈(m,c) xw}, xe and xw ∈Rde are external embeddings for entity e and word w, respectively. Note that the word and entity embeddings are not fine-tuned, so the model does not have any free parameters. They then extract Np = 4 top candidates according to pwiki(e|m) and Nq = 3 top candidates according to qwiki(e|m, c) to get the candidate list. For details, we refer to the original paper. On the development set, this step yields recall of 97.2%. 3.2 Message passing on link graph We describe now how we use Wikipedia link statistics to further reduce the candidate list. 3.2.1 Link graph We construct an undirected graph from Wikipedia; vertices of this graph are Wikipedia entities. We link vertex eu with vertex ev if there is a document Dwiki in Wikipedia such that either • Dwiki is a Wikipedia article describing eu, and ev appears in it, or • Dwiki contains eu, ev and there are less than l entities between them. For instance, in Figure 3, for document “Brexit”, we link entity Brexit to all other entities. However, we do not link United Kingdom to Greek withdrawal from the eurozone as they are more than l entities apart. 1939 83.51 93.93 95.85 96.52 96.85 97.03 97.22 Number of kept candidates Recall (%) 80 85 90 95 100 1 2 3 4 5 6 7 Figure 4: Recall as a function of the candidate number. 3.2.2 Model and inference Now we consider unlabeled (non-Wikipedia) documents. We use this step both to preprocess training documents and also apply it to new unlabeled documents at test time. First, we produce at most Nq + Np candidates for each mention in a document D as described above.4 Then we define a probabilistic model over entities in D: rwiki(e1, . . . , en|D) ∝exp{ X i̸=j ϕwiki(ei, ej)}, where ϕwiki(ei, ej) is 0 if ei is linked with ej in the link graph and −∆, otherwise (∆∈R+). Intuitively, the model scores an assignment e1, . . . , en according to the number of unlinked pairs in the assignment. 
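This pairwise model can be sketched in a few lines: the link graph is a set of unordered entity pairs, ϕwiki returns 0 for linked pairs and −∆ otherwise (∆ = 1000 following the hyper-parameter table in the appendix), and an assignment's score sums over all pairs, so assignments with fewer unlinked pairs score higher. The toy graph and entities below are illustrative only, and the brute-force scoring of full assignments is shown just to make the model concrete; the paper instead computes per-mention max-marginals with loopy belief propagation, as described next.

```python
from itertools import combinations

DELTA = 1000.0  # so the unlinked-pair penalty is -Delta = -1000, as in the appendix

# Toy link graph: unordered pairs of entities that are linked in Wikipedia.
link_graph = {
    frozenset({"Donald_Trump", "Brexit"}),
    frozenset({"Donald_Trump", "Theresa_May"}),
    frozenset({"Brexit", "Theresa_May"}),
}

def phi_wiki(e_i, e_j):
    """Pairwise potential: 0 if the two entities are linked, -Delta otherwise."""
    return 0.0 if frozenset({e_i, e_j}) in link_graph else -DELTA

def assignment_score(entities):
    """Unnormalised log-score of an assignment e_1, ..., e_n (sum over pairs)."""
    return sum(phi_wiki(e_i, e_j) for e_i, e_j in combinations(entities, 2))

# A coherent assignment vs. one that links "May" to the Queen (Mary_of_Teck).
print(assignment_score(["Donald_Trump", "Brexit", "Theresa_May"]))   # 0.0
print(assignment_score(["Donald_Trump", "Brexit", "Mary_of_Teck"]))  # -2000.0
```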
We use max-product version of LBP to produce approximate marginals: rwiki(ei|D) ≈ max e1,...,ei−1 ei+1,...,en rwiki(e1, . . . , en|D) For example, in Figure 1, we linked Donald Trump to Brexit and with Theresa May, that are linked in the Wikipedia link graph. The assignment Donald Trump, Brexit, Theresa May does not contain unlinked pairs and will receive the highest score. In Figure 4, we plot recall on AIDA CoNLL development set as a function of the candidate number (ranking is according to rwiki(ei|D)). We can see that we can reduce Np + Nq = 7 candidates down to Nw = 2 and still maintain recall of 93.9%.5 The remaining (Np + Nq −Nw) entities are kept as ‘negative examples’ E− i for training the disambiguation model (see Figure 1). 4Less for entities which are not ambiguous enough. 5To break ties, we chose a mention which is ranked higher in the first step. 3.3 Aggregate scoring function As we can see from Figure 4, keeping the top candidate from the list would yield recall of 83.5%, which is about 10% below state of the art. In order to test how far we can go without using the disambiguation model, we combine together the signals we relied on in the previous section. Specifically, rather than using rwiki alone, we linearly combine the Levenstein edit distance (Levenshtein, 1966), with the scores pwiki and rwiki. Parameters are described in the appendix. The coefficients are chosen on the development set. We refer to this score as sc(ei|D). 4 Experiments 4.1 Parameters and Resources We used DeepEd6 from Ganea and Hofmann (2017) to obtain entity embeddings. We also used Word2vec word embeddings7 to compute the local score function and GloVe embeddings8 within the attention model in Figure 2. Hyper-parameter selection was performed on the AIDA CoNLL development set. The margin parameters δ and the learning rate were set to 0.1 and 10−4. We use early stopping by halting training when F1 score on the development set does not increase after 50,000 updates. We report the mean and 95% confidence of the F1 scores using five runs of our system. See additional details in the appendix. The source code and data are publicly available at https://github.com/lephong/wnel. 4.2 Setting We carried out our experiments in the standard setting but used other (unlabeled) data for training, as described below. We used six test sets: AIDA CoNLL ‘testb’ (Hoffart et al., 2011) (aka AIDAB); MSNBC, AQUAINT, ACE2004, cleaned and updated by Guo and Barbosa (2016); CWEB, WIKI, automatically extracted from Clueweb (Guo and Barbosa, 2016; Gabrilovich et al., 2013). We use AIDA CoNLL ‘testa’ data (aka AIDA-A) as our development set (216 documents). In our experiments, we randomly selected 30,000 unlabeled documents from RCV1. Since we focus on the inductive setting, we do not include any documents used to create AIDA CoNLL 6github.com/dalab/deep-ed 7code.google.com/archive/p/word2vec/ 8nlp.stanford.edu/projects/glove/ 1940 development and test sets in our training set. In addition, we did not use any articles appearing in WIKI to compute rwiki. We rely on SpaCy9 to extract named entity mentions. We compare our model to those systems which were trained on Wikipedia or on Wikipedia plus unlabeled documents. They are: Milne and Witten (2008), Ratinov et al. (2011a), Hoffart et al. (2011), Cheng and Roth (2013), Chisholm and Hachey (2015), Lazic et al. (2015). Note that we are aware of only Lazic et al. (2015) which relied on learning from a combination of Wikipedia and unlabeled documents. 
They use semi-supervised learning and exploit only local context (i.e. coherence with other entities is not modeled). We also compare to recent state-of-the-art systems trained supervisedly on Wikipedia and extra supervision or on AIDA CoNLL: Chisholm and Hachey (2015), Guo and Barbosa (2016), Globerson et al. (2016), Yamada et al. (2016), Ganea and Hofmann (2017), Le and Titov (2018). Chisholm and Hachey (2015) used supervision in the form of links to Wikipedia from non-Wikipedia pages, Wikilinks (Singh et al., 2012)). This annotation can also be regarded as weak or incidental supervision, as it was not created with the entity linking problem in mind. The others exploited AIDA CoNLL training set. F1 scores of these systems are taken from Guo and Barbosa (2016), Ganea and Hofmann (2017) and Le and Titov (2018). We use the standard metric: ‘in-knowledgebase’ micro F-score, in other words, F1 of those mentions which can be linked to the knowledge base. We report the mean and 95% confidence of the F1 scores using five runs of our system. 4.3 Results The results are shown in Table 1. First, we compare to systems which relied on Wikipedia and those which used Wikipedia along with unlabeled data (‘Wikipedia + unlab’), i.e. the top half of Table 1. These methods are comparable to ours, as they use the same type of information as supervision. Our model outperformed all of them on all test sets. One may hypothesize that this is only due to using more powerful feature representations rather than our estimation method or document-level disambiguation. We will address this hypothesis in the ablation studies below. The approach of Chrisholm and Hachey (2015) does 9https://spacy.io/ not quite fall in this category as, besides information from Wikipedia, they use a large collection of web pages (34 million web links). When evaluated on AIDA-B, their scores are still lower than ours, though significantly higher that those of the previous systems suggesting that web links are indeed valuable. Though we do not exploit web links in our model, in principle, they can be used in the exactly same way as Wikipedia links. We leave it for future work. Second, we compare to fully-supervised systems, which were estimated on AIDA-CoNLL documents. Recall that every mention in these documents has been manually annotated or validated by a human expert. We distinguish results on a test set taken from AIDA-CoNLL (AIDA-B) and the other standard test sets not directly corresponding to the AIDA-CoNLL domain. When tested on the latter, our approach is very effective, on average outperforming fully-supervised techniques. We would argue that this is the most important set-up and fair to our approach: it is not feasible to obtain labels for every domain of interest and hence, in practice, supervised systems are rarely (if ever) used in-domain. As expected, on the in-domain test set (AIDA-B), the majority of recent fully-supervised methods are more accurate than our model. However, even on this test set our model is not as far behind, for example, outperforming the system of Guo and Barbosa (2016). 4.4 Analysis and ablations We perform ablations to see contributions of individual modeling decisions, as well as to assess importance of using unlabeled data. Is constraint-driven learning effective? In this work we advocated for learning our model on unlabeled non-Wikipedia documents and using Wikipedia to constraint the space of potential entity assignments. 
A simpler alternative would be to learn to directly predict links within Wikipedia documents and ignore unlabeled documents. Still, in order to show that our learning approach and using unlabeled documents is indeed preferable, we estimate our model on Wikipedia articles. Instead of using the candidate selection step to generate list E+ i , we used the gold entity as singleton E+ i in training. The results are shown in Table 2 (‘Wikipedia’). The resulting model is significantly less accurate than the one which used unlabeled documents. The score difference is larger 1941 Methods AIDA-B MSNBC AQUAINT ACE2004 CWEB WIKI Avg Wikipedia (Milne and Witten, 2008) 78 85 81 64.1 81.7 77.96 (Ratinov et al., 2011a) 75 83 82 56.2 67.2 72.68 (Hoffart et al., 2011) 79 56 80 58.6 63 67.32 (Cheng and Roth, 2013) 90 90 86 67.5 73.4 81.38 (Chisholm and Hachey, 2015) 84.9 Wiki + unlab (Lazic et al., 2015) 86.4 Our model 89.66 ±0.16 92.2 ±0.2 90.7 ±0.2 88.1 ±0.0 78.2 ±0.2 81.7 ±0.1 86.18 Wiki + Extra supervision (Chisholm and Hachey, 2015) 88.7 Fully-supervised (Wiki + AIDA CoNLL train) (Guo and Barbosa, 2016) 89.0 92 87 88 77 84.5 85.7 (Globerson et al., 2016) 91.0 (Yamada et al., 2016) 91.5 (Ganea and Hofmann, 2017) 92.22 ±0.14 93.7 ±0.1 88.5 ±0.4 88.5 ±0.3 77.9 ±0.1 77.5 ±0.1 85.22 (Le and Titov, 2018) 93.07 ±0.27 93.9 ±0.2 88.3 ±0.6 89.9 ±0.8 77.5 ±0.1 78.0 ±0.1 85.5 Table 1: F1 scores on six test sets. The last column, Avg, shows the average of F1 scores on MSNBC, AQUAINT, ACE2004, CWEB, and WIKI. Our model AIDA-A AIDA-B Avg weakly-supervised 88.05 89.66 86.18 fully-supervised on Wikipedia 87.23 87.83 85.84 on AIDA CoNLL 91.34 91.87 84.55 Table 2: F1 scores of our model when it is weakly-supervised and when it is fully-supervised on Wikipedia and on AIDA CoNLL. AIDA-A is our development set. Avg is the average of F1 scores on MSNBC, AQUAINT, ACE2004, CWEB, and WIKI. Each F1 is the mean of five runs. Model AIDA-A Our model 88.05 without local 82.41 without attention 86.82 No disambiguation model (sc) 86.42 Table 3: Ablation study on AIDA CoNLL development set. Each F1 score is the mean of five runs. for AIDA-CoNLL test set than for the other 5 test sets. This is not surprising as our unlabeled documents originate from the same domain as AIDACoNLL. This suggests that the scores on the 5 tests could in principle be further improved by incorporating unlabeled documents from the corresponding domains. Additionally we train our model on AIDA-CoNLL, producing its fully-supervised version (‘AIDA CoNLL’ row in Table 2). Though, as expected, this version is more accurate on AIDA test set, similarly to other fully-supervised methods, it overfits and does not perform that well on the 5 out-of-domain test sets. As we do not want to test multiple systems on the final test set, we report the remaining ablations on the development set (AIDA-A), Table 3.10 Is the document-level disambiguation model beneficial? As described in Section 3.3 (‘Aggregate scoring function’), we constructed a baseline which only relies on link statistics in Wikipedia as well as string similarity (we refereed to its scoring function as sc). It appears surprisingly strong, however, we still outperform it by 1.6% (see Table 3). Is both local and global disambiguation beneficial? When we use only global coherence (i.e. only second term in expression (1)) and drop any modeling of local context on the disambiguation stage, the performance drops very substantially (to 82.4% F1, see Table 3). 
This suggests that the local scores are crucial in our model: an entity should fit its context (e.g., in our running example, ‘Mrs’ is not used to address a Queen). Without using local scores the disambiguation model appears to be even less accurate than our ‘no-statisticaldisambiguation’ baseline. It is also important to have an accurate global model: not using global attention results in a 1.2% drop in performance. Do we need many unlabeled documents? Figure 5 shows how the F1 score changes when we use different numbers of unlabeled documents for 10The AIDA CoNLL development set appears harder than the test set, as the numbers of all systems tend to be lower (Ganea and Hofmann, 2017; Le and Titov, 2018). 1942 73.90 84.50 85.41 86.83 87.91 88.05 # of raw docs F1 (%) 70 75 80 85 1 10 100 1000 10000 Figure 5: F1 on AIDA-A vs. number of unlabeled documents. Type Our model Fully-supervised learning on AIDA CoNLL LOC 85.53 89.41 MISC 75.71 83.27 ORG 89.51 92.70 PER 97.20 97.73 Table 4: Accuracy (%) by NER type on AIDA-A. training. As expected, the score increases with the number of raw documents, but changes very slowly after 10,000 documents. Which entities are easier to link? Figure 4 shows the accuracy of two systems for different NER (named entity recognition) types. We consider four types: location (LOC), organization (ORG), person (PER), and miscellany (MICS). These types are given in CoNLL 2003 dataset, which was used as a basis for AIDA CoNLL.11 Our model is accurate for PER, achieving accuracy of about 97%, only 0.53% lower than the supervised model. However, annotated data appears beneficial for other named-entity types. One of the harder cases for our model is distinguishing nationalities from languages (e.g., “English peacemaker” vs “English is spoken in the UK”). Both linking options typically appear in the positive sets simultaneously, so the learning objective does not encourage the model to distinguish the two. This is one of most frequent mistakes for tag ‘MISC’. 5 Related work Using Wikipedia pages to learn linkers (‘wikifiers’) has been a popular line of research both for named entity linking (Cheng and Roth, 2013; Milne and Witten, 2008) and generally entity disambiguation tasks (Ratinov et al., 2011b). How11Note that we do not use NER types in our system. ever, since introduction of the AIDA CoNLL dataset, fully-supervised learning on this dataset became standard for named entity linking, with supervised systems (Globerson et al., 2016; Guo and Barbosa, 2016; Yamada et al., 2016) outperforming alternatives even on out-of-domain datasets such as MSNBC and ACE2004. Note though that supervised systems also rely on Wikipedia-derived features. As an alternative to using Wikipedia pages, links to Wikipedia pages from the general Web were used as supervision (Singh et al., 2012). As far as we are aware, the system of Chisholm and Hachey (2015) is the only such system evaluated on standard named-entity linking benchmarks, and we compare to them in our experiments. This line of work is potentially complementary to what we propose, as we could use the Web links to construct weak supervision. The weakly- or semi-supervised set-up, which we use, is not common for entity linking. The only other approach which uses a combination of Wikipedia and unlabeled data, as far as we are aware of, is by Lazic et al. (2015). We discussed it and compared to in previous sections. Our setup is inspired by distantly-supervised learning in relation extraction (Mintz et al., 2009). 
In distant learning, the annotation is automatically (and noisily) induced relying on a knowledge base instead of annotating the data by hand. Fan, Zhou, and Zheng (2015) learned a Freebase linker using distance supervision. Their evaluation is nonstandard. They also do not attempt to learn a disambiguation model but directly train their system to replicate noisy projected annotations. Wang et al. (2015) refer to their approach as unsupervised, as they do not use unlabeled data. However, their method does not involve any learning and relies on matching heuristics. Some aspects of their approach (e.g., using Wikipedia link statitics) resemble our candidate generation stage. So, in principle, their approach could be compared to the ‘no-disambiguation’ baselines (sc) in Table 3. Their evaluation set-up is not standard. Our model (but not the estimation method) bears similarities to the approaches of Le and Titov (2018) and Globerson at al. (2016). Both these supervised approaches are global and use attention. 1943 6 Conclusions In this paper we proposed a weakly-supervised model for entity linking. The model was trained on unlabeled documents which were automatically annotated using Wikipedia. Our model substantially outperforms previous methods, which used the same form of supervision, and rivals fullysupervised models trained on data specifically annotated for the entity-linking problem. This result may be interpreted as suggesting that humanannotated data is not beneficial for entity linking, given that we have Wikipedia and web links. However, we believe that the two sources of information are likely to be complementary. In the future work we would like to consider setups where human-annotated data is combined with naturally occurring one (i.e. distantly-supervised one). It would also be interesting to see if mistakes made by fully-supervised systems differ from the ones made by our system and other Wikipediabased linkers. Acknowledgments We would like to thank anonymous reviewers for their suggestions and comments. The project was supported by the European Research Council (ERC StG BroadSem 678254), the Dutch National Science Foundation (NWO VIDI 639.022.518), and an Amazon Web Services (AWS) grant. References Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2007. Guiding semi-supervision with constraint-driven learning. In Proceedings of the 45th annual meeting of the association of computational linguistics, pages 280–287. Harr Chen, SRK Branavan, Regina Barzilay, and David R Karger. 2009. Content modeling using latent permutations. Journal of Artificial Intelligence Research, 36:129–163. Xiao Cheng and Dan Roth. 2013. Relational inference for wikification. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1787–1796, Seattle, Washington, USA. Association for Computational Linguistics. Andrew Chisholm and Ben Hachey. 2015. Entity disambiguation with web links. Transactions of the Association of Computational Linguistics, 3:145–156. Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, 31(1):25–70. Miao Fan, Qiang Zhou, and Thomas Fang Zheng. 2015. Distant supervision for entity linking. Proceedings of PACLIC. Evgeniy Gabrilovich, Michael Ringgaard, and Amarnag Subramanya. 2013. Facc1: Freebase annotation of clueweb corpora. Octavian-Eugen Ganea and Thomas Hofmann. 2017. Deep joint entity disambiguation with local neural attention. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2609–2619. Association for Computational Linguistics. Amir Globerson, Nevena Lazic, Soumen Chakrabarti, Amarnag Subramanya, Michael Ringaard, and Fernando Pereira. 2016. Collective entity resolution with multi-focal attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 621–631. Association for Computational Linguistics. Zhaochen Guo and Denilson Barbosa. 2016. Robust named entity disambiguation with random walks. Semantic Web, (Preprint). Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen F¨urstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 782–792, Edinburgh, Scotland, UK. Association for Computational Linguistics. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 541–550, Portland, Oregon, USA. Association for Computational Linguistics. Nevena Lazic, Amarnag Subramanya, Michael Ringgaard, and Fernando Pereira. 2015. Plato: A selective context model for entity resolution. Transactions of the Association for Computational Linguistics, 3:503–515. Phong Le and Ivan Titov. 2018. Improving Entity Linking by Modeling Latent Relations between Mentions. Proceedings of ACL. V. I. Levenshtein. 1966. Binary Codes Capable of Correcting Deletions, Insertions and Reversals. Soviet Physics Doklady, 10:707. David Milne and Ian H Witten. 2008. Learning to link with wikipedia. In Proceedings of the 17th ACM conference on Information and knowledge management, pages 509–518. ACM. 1944 Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003–1011. Association for Computational Linguistics. Ani Nenkova. 2008. Entity-driven rewrite for multidocument summarization. In IJCNLP. Lev Ratinov, Dan Roth, Doug Downey, and Mike Anderson. 2011a. Local and global algorithms for disambiguation to wikipedia. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1375–1384. Association for Computational Linguistics. Lev Ratinov, Dan Roth, Doug Downey, and Mike Anderson. 2011b. Local and global algorithms for disambiguation to wikipedia. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 1375–1384. Association for Computational Linguistics. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 148–163. Springer. Sameer Singh, Amarnag Subramanya, Fernando Pereira, and Andrew McCallum. 2012. Wikilinks: A large-scale cross-document coreference corpus labeled via links to wikipedia. University of Massachusetts, Amherst, Tech. Rep. UM-CS-2012, 15. 
Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, pages 455–465. Association for Computational Linguistics. Martin J Wainwright, Michael I Jordan, et al. 2008. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305. Han Wang, Jin Guang Zheng, Xiaogang Ma, Peter Fox, and Heng Ji. 2015. Language and domain independent entity linking with quantified collective validation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 695–704. Association for Computational Linguistics. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association of Computational Linguistics, 6:287–302. Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the embedding of words and entities for named entity disambiguation. In Proceedings of CoNLL. A Model details To compute ˆs, we combine s with pwiki as below: ˆs(ei|D) = f s(ei|D), pwiki(ei|mi)  (2) where f is a one-hidden layer neural network (in our experiment, the number of hidden neurons is 100). Our final model is the sum of ˆs and sc (i.e., ˆs + sc) where sc is computed by a linear combination of: • d(ei, mi), the string similarity score between the title of ei and mi, using Levenshtein algorithm, • pwiki(ei|mi), and • rwiki(ei|D). In other words we have: sc(ei|D) =α × d(ei, mi)+ β × pwiki(ei|mi) + γ × rwiki(ei|D) (3) We tune α, β, γ on the development set. B Candidate selection In a nutshell, our method to automatically annotate raw texts is summarized in Algorithm 1. The algorithm receives a list of mentions and contexts D = {(m1, c1), (m2, c2), ..., (mM, cM)}. For each mi, ci, it will compute a list of positive candidates E+ i and a list of negative candidates E− i . C Experiments: hyper-parameter choice The values of the model hyper-parameters are shown in Table 5. For our baseline sc, α, β, γ are 0.1, 1., and 0.95 respectively. 1945 Input: D = {(m1, c1), ..., (mM, cM)}, n ∈N Output: (E+ 1 , E− 1 ), (E+ 2 , E− 2 ), ..., (E+ M, E− M): list of positive and negative candidates for (mi, ci) ∈D do compute pwiki(ei|mi), qwiki(ei|mi, ci) and rwiki(ei|D); E30 ←30 candidates with the highest pwiki(ei|mi); Ei ←4 candidates with the highest pwiki(ei|mi) and 3 candidates with the highest qwiki(ei|mi, ci) among E30; E+ i ←2 candidates in Ei with the highest rwiki(ei|D) E− i ←Ei \ E+ i end Algorithm 1: Automatically annotate a raw document hyper-parameter value Model de, dw (entity and word embedding dimension) 300 window size 50 number of hidden neurons in f (in Equation 2) 100 mini-batch size 1 document δ (margin) 0.1 learning rate 0.001 α (in Equation 3) 0.2 β (in Equation 3) 0.2 γ (in Equation 3) 0.05 number of updates for early stopping 50,000 Candidate selection l (max distance between two entities) 100 −∆ -1,000 number of raw document for training 30,000 |E+ i | number of kept candidates for training 2 |E+ i | number of kept candidates for testing 3 number of LBP loops 10 Table 5: The values of the model hyper-parameters
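For concreteness, Algorithm 1 can be written out as a short per-mention function. The sketch below assumes the three scorers pwiki, qwiki and rwiki are supplied as callables (they are defined in Sections 3.1–3.2), and it simplifies the duplicate- and tie-handling details; the constants (30, then 4 + 3, then 2) follow the algorithm and Table 5.

```python
def annotate_mention(candidates, mention, context, doc, p_wiki, q_wiki, r_wiki):
    """Split a mention's initial candidate set into weak positives E+ and
    negatives E-, following Algorithm 1 (30 -> 4 + 3 -> 2 candidates)."""
    # Keep the 30 candidates with the highest p_wiki(e | m).
    e30 = sorted(candidates, key=lambda e: p_wiki(e, mention), reverse=True)[:30]

    # 4 best by p_wiki plus 3 best by q_wiki among those 30 (order-preserving union).
    by_p = sorted(e30, key=lambda e: p_wiki(e, mention), reverse=True)[:4]
    by_q = sorted(e30, key=lambda e: q_wiki(e, mention, context), reverse=True)[:3]
    e_i = list(dict.fromkeys(by_p + by_q))

    # The 2 candidates with the highest r_wiki(e | D) become weak positives;
    # the remaining candidates are kept as negatives for training.
    ranked = sorted(e_i, key=lambda e: r_wiki(e, doc), reverse=True)
    return ranked[:2], ranked[2:]
```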
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1946–1956 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1946 Pre-Learning Environment Representations for Data-Efficient Neural Instruction Following David Gaddy and Dan Klein Computer Science Division University of California, Berkeley {dgaddy,klein}@berkeley.edu Abstract We consider the problem of learning to map from natural language instructions to state transitions (actions) in a data-efficient manner. Our method takes inspiration from the idea that it should be easier to ground language to concepts that have already been formed through pre-linguistic observation. We augment a baseline instruction-following learner with an initial environment-learning phase that uses observations of language-free state transitions to induce a suitable latent representation of actions before processing the instruction-following training data. We show that mapping to pre-learned representations substantially improves performance over systems whose representations are learned from limited instructional data alone. 1 Introduction In the past several years, neural approaches have become increasingly central to the instruction following literature (e.g. Misra et al., 2018; Chaplot et al., 2018; Mei et al., 2016). However, neural networks’ powerful abilities to induce complex representations have come at the cost of data efficiency. Indeed, compared to earlier logical formbased methods, neural networks can sometimes require orders of magnitude more data. The datahungriness of neural approaches is not surprising – starting with classic logical forms improves data efficiency by presenting a system with pre-made abstractions, where end-to-end neural approaches must do the hard work of inducing abstractions on their own. In this paper, we aim to combine the power of neural networks with the dataefficiency of logical forms by pre-learning abstractions in a semi-supervised way, satiating part of the network’s data hunger on cheaper unlabeled data from the environment. When neural nets have only limited data that Figure 1: After seeing this transition, a neural net might generalize this action as stack red blocks to the right of blue blocks except for on brown blocks, but a generalization like stack red blocks on orange blocks is more plausible and generally applicable. We aim to guide our model towards more plausible generalizations by pre-learning inductive biases from observations of the environment. pairs language with actions, they suffer from a lack of inductive bias, fitting the training data but generalizing in ways that seem nonsensical to humans. For example, a neural network given the transition shown in Figure 1 might map the corresponding instruction to an adequate but unlikely meaning that red blocks should be stacked to the right of blue blocks except for on brown blocks. The inspiration for this work comes from the idea that humans avoid spurious hypotheses like this example partly because they have already formed a set of useful concepts about their environment before learning language (Bloom, 2000; Hespos and Spelke, 2004). These pre-linguistic abstractions then constrain language learning and help generalization. With this view in mind, we allow our instruction following agent to observe the environment and build a representation of it prior to seeing any linguistic instructions. 
In particular, we adopt a semisupervised setup with two phases, as shown in Figure 2: an environment learning phase where the system sees samples of language-free state transitions from actions in the environment, and a language learning phase where instructions are given along with their corresponding effects on the en1947 (i) Environment learning (ii) Language learning Figure 2: Diagram of the network modules during the environment learning and language learning phases. s and s′ represent states before and after an action, c represents a natural language command, and a represents a latent action representation. The environment learning phase (i) uses a conditional autoencoder to pre-train the decoder D toward a good representation space for a, so that fewer linguistic examples are needed during language learning (ii). vironment. This setup applies when interactions with the environment are plentiful but only a few are labeled with language commands. For example, a robotic agent could passively observe a human performing a task, without requiring the human to perform any work they would not normally do, so that later the agent would need less direct instruction from a human in the form of language. We present an environment learning method that uses observations of state transitions to build a representation that aligns well with the transitions that tend to occur. The method takes advantage of the fact that in complex environments (or even relatively simple ones), not every state transition is equally likely, but the patterns of actions that do occur hint at an underlying structure that we can try to capture. We demonstrate the effectiveness of our pretrained representations by using them to increase data efficiency on two instruction-following tasks (Section 4). We show that when given few instruction examples, a network using our pre-learned representations performs substantially better than an otherwise identical network without these representations, increasing performance by over ten absolute percentage points on small datasets and increasing data-efficiency by more than an order of magnitude. We find that while performance with a typical neural representation trained end-to-end lags considerably behind performance with human-designed representations, our unsupervised representations are able to help cross a substantial portion of this gap. In addition, we perform analysis of the meaning captured by our representations during the unsupervised environment learning phase, demonstrating that the semantics captured has noteworthy similarity to a hand-defined system of logical forms (Section 7). 2 Problem Setup This work applies to the class of problems where instructions are mapped to actions conditioned on an environment state. These tasks can be formalized as learning a mapping M(s, c) 7→s′, where c is a command in natural language, s is an environment state, and s′ is the desired environment state after following the command. Classically, these problems are approached by introducing a logical form l that does not depend on the state s, and learning a mapping from language to logical forms P(c) 7→l (Artzi and Zettlemoyer, 2013; Zettlemoyer and Collins, 2005). A hand-defined execution function Q(s, l) 7→s′ is then used to generalize the action across all possible states. When P and Q are composed, Q constrains the overall function to generalize in a semantically coherent way across different states. 
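At the interface level, the two factorizations of M can be summarized by the short sketch below; P, Q, L and D are placeholders standing in for the components named above and in Figure 2, so only the call structure is meaningful.

def follow_with_logical_forms(s, c, P, Q):
    l = P(c)          # state-independent logical form parsed from the command
    return Q(s, l)    # hand-defined executor generalizes l coherently across states

def follow_with_neural_modules(s, c, L, D):
    a = L(c)          # learned latent action representation of the command
    return D(s, a)    # learned decoder applies the latent action to the state s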
In contrast to the logical form-based method, our work builds off of an end-to-end neural approach, which is applicable in settings where a system of logical forms is not provided. We structure our network as a language module L and action decoder D in an encoder-decoder style architecture (Figure 2(ii)), similar to previous neural instruction following work (e.g. Mei et al., 2016). L and D are analogous to the P and Q functions in the logical form approach, however, unlike before, the interface between the two modules is a vector a and the function D is learned. This gives the neural network greater flexibility, but also cre1948 ates the problem that the decoder D is no longer constrained to generalize across different states in natural ways. 3 Method 3.1 Learning Action Representations from the Environment The goal of this paper is improve data efficiency by pre-training the decoder D to use a bettergeneralizing representation for the vector a. We do this in an unsupervised way by allowing our system to see examples of state transitions (actions) in the environment before seeing any language. We suppose the existence of a large number of language-free state transitions s, s′ and introduce an environment learning phase to learn representations of these transitions before language learning starts. During this environment learning phase, we train a conditional autoencoder of s′ given s by introducing an additional encoder E(s, s′) 7→a to go along with decoder D(s, a) 7→s′, as shown in Figure 2(i). Both E and D are given the initial state s, and E must create a representation of the final state s′ so that D can reproduce it from s. The parameters of E and D are trained to maximize log likelihood of s′ under the output distribution of D. arg max θE,θD  log PD(s′|s, E(s, s′)  (1) If given enough capacity, the representation a might encode all the information necessary to produce s′, allowing the decoder to ignore s. However, with a limited representation space, the decoder must learn to integrate information from a and s, leading a to capture an abstract representation of the transformation between s and s′. To be effective, the representation a needs to be widely applicable in the environment and align well with the types of state transitions that typically occur. These pressures cause the representation to avoid meanings like to the right of blue except for on brown that rarely apply. Note that during pretraining, we do not add any extra information to indicate that different transitions might be best represented with the same abstract action, but the procedure described here ends up discovering this structure on its own. Later, after demonstrating the effectiveness of this environment learning procedure in Section 4, we introduce two additional improvements to the procedure in sections 5 and 6. In Section 7, we show that our pre-training discovers representations that align well with logical forms when they are provided. 3.2 Language Learning After environment learning pre-training, we move to the language learning phase. In the language learning phase, we are given state transitions paired with commands (s, s′, c) and learn to map language to the appropriate result state s′ for a given state s. As discussed above and shown in Figure 2(ii), we form an encoder-decoder using a language encoder L and action decoder D. To improve generalization, we use the decoder D that was pre-trained during environment learning. 
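A minimal PyTorch-style sketch of one environment-learning update under Eq. (1) is shown below. The module boundaries, the categorical state encoding (each grid cell or character treated as a class index), and the use of a single optimizer over both modules are assumptions made for illustration, not a description of the authors' exact code.

import torch.nn.functional as F

def environment_learning_step(encoder, decoder, optimizer, s, s_next):
    # E(s, s') -> a: summarize the transition as a latent action representation
    a = encoder(s, s_next)
    # D(s, a) -> logits over the categorical cells of the next state
    logits = decoder(s, a)
    # maximizing log P_D(s' | s, E(s, s')) is minimizing cross-entropy to s'
    loss = F.cross_entropy(logits.flatten(0, -2), s_next.flatten())
    optimizer.zero_grad()
    loss.backward()               # the gradient reaches both E and D
    optimizer.step()              # optimizer is assumed to cover both modules
    return loss.item()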
If D generalizes representations across different states in a coherent way as we hope, then the composed function D(s, L(c)) will also generalize well. We can either fix the parameters of D after environment learning or simply use the prelearned parameters as initialization, which will be discussed more in the experiments section below. The language module L is trained by differentiating through the decoder D to maximize the log probability that D outputs the correct state s′. arg max θL  log PD(s′|s, L(c))  (2) 3.3 Comparison with Action Priors One of the roles of environment learning pretraining is to learn something like a prior over state transitions, ensuring that we select a reasonable action based on the types of transitions that we have seen. However, the method described here has advantages over a method that just learns a transition prior. In addition to representing which transitions are likely, our pre-training method also induces structure within the space of transitions. A single action representation a can be applied to many different states to create different transitions, effectively creating a group of transitions. After training, this grouping might come to represent a semantically coherent category (see analysis in Section 7). This type of grouping information may not be easily extractable from a prior. For example, a prior can tell you that stacking red blocks on orange blocks is likely across a range of initial configurations, but our pre-training method may also choose to represent all of these transitions with the same vector a. Finding this underly1949 ing structure is key to the generalization improvements seen with our procedure. 4 Experiments We evaluate our method in two different environments, as described below in sections 4.1 and 4.2.1 4.1 Block Stacking For our first test environment, we use the block stacking task introduced by Wang et al. (2016) and depicted in Figure 1. This environment consists of a series of levels (tasks), where each level requires adding or removing blocks to get from a start configuration to a goal configuration. Human annotators were told to give the computer step by step instructions on how to move blocks from one configuration to the other. After each instruction, the annotator selected the desired resulting state from a list. Following the original work for this dataset (Wang et al., 2016), we adopt an online learning setup and metric. The data is broken up into a number of sessions, one for each human annotator, where each session contains a stream of commands c paired with block configuration states s. The stream is processed sequentially, and for each instruction the system predicts the result of applying command c to state s, based on a model learned from previous examples in the stream. After making a prediction, the system is shown the correct result s′ and is allowed to make updates to its model before moving on to the next item in the stream. The evaluation metric, online accuracy, is then the percentage of examples for which the network predicted the correct resulting state s′ when given only previous items in the stream as training. Under this metric, getting predictions correct at the beginning of the stream, when given few to no examples, is just as important as getting predictions correct with the full set of data, making it as much a measure of data-efficiency as of final accuracy. 
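The online evaluation protocol just described reduces to a simple loop. In the sketch below, model.predict and model.update are placeholders for the prediction and retrain-to-convergence steps, and the session format (a list of (s, c, s') triples) is an assumption made for illustration.

def online_accuracy(model, session):
    correct, history = 0, []
    for s, c, s_true in session:
        s_pred = model.predict(s, c)       # prediction uses only earlier stream items
        correct += int(s_pred == s_true)
        history.append((s, c, s_true))     # the gold result is then revealed ...
        model.update(history)              # ... and the model is retrained on all seen examples
    return correct / len(session)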
The longest sessions only contain on the order of 100 training examples, so the bulk of predictions are made with only tens of examples. To train a neural model in this framework, the model is updated by remembering all previous examples seen in the stream so far and training the neural network to convergence on the full set of prior examples. While training the network to con1Code for all experiments can be found at github.com/dgaddy/environment-learning. vergence after every example is not very computationally efficient, the question of making efficient online updates to neural networks is orthogonal to the current work, and we wish to avoid any confounds introduced by methods that make fewer network updates. Since the original dataset does not contain a large number of language-free state transitions as we need for environment learning, we generate synthetic transitions. To generate state transitions s, s′, we generate new levels using the random procedure used in the original work and programmatically determine a sequence of actions that solve them. The levels of the game are generated by a procedure which selects random states and then samples a series of transformations to apply to generate a goal state. We create a function that generates a sequence of states from the start to the goal state based on the transformations used during goal generation. Most of the levels require one or two actions with simple descriptions to reach the goal. Following the assumption that state transitions in the environment are plentiful, we generate new transitions for every batch during environment learning. We leave an analysis of the effect of environment learning data size to future work. 4.1.1 State Representation and Network Architecture We represent a state as a two dimensional grid, where each grid cell represents a possible location (stack index and height) of a block. The state inputs to the encoder and decoder networks use a one-hot encoding of the block color in each cell or an empty cell indicator if no block is present. The output of the decoder module is over the same grid, and a softmax over colors (or empty) is used to select the block at each position. Note that the original work in this environment restricted outputs to states reachable from the initial state by a logical form, but here we allow any arbitrary state to be output and the model must learn to select from a much larger hypothesis space. The encoder module E consists of convolutions over the states s and s′, subtraction of the two representations, pooling over locations, and finally a fully connected network which outputs the representation a. The decoder module D consists of convolution layers where the input is state s and where a is broadcast across all positions to an intermediate layer. The language module L runs an LSTM over the words, then uses a fully connected 1950 network to convert the final state to the representation a. Details of the architecture and hyperparameters can be found in Appendix A.1. 4.1.2 Results Our primary comparison is between a neural network with pre-trained action representations and an otherwise identical neural model with no pretrained representations. The neural modules are identical, but in the full model we have fixed the parameters of the decoder D after learning good representations with the environment learning procedure. 
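Fixing the decoder and training only the language module through it, as in Eq. (2), can be sketched as follows (PyTorch-style; the tensor layout follows the same categorical-state assumption as the environment-learning sketch above, and the optimizer is assumed to cover only L's parameters).

import torch.nn.functional as F

def freeze(module):
    for p in module.parameters():
        p.requires_grad_(False)            # keep the environment-learned D fixed

def language_learning_step(language_module, decoder, optimizer, c, s, s_next):
    a = language_module(c)                 # L(c): command -> latent action vector
    logits = decoder(s, a)                 # gradients flow through the frozen D into L
    loss = F.cross_entropy(logits.flatten(0, -2), s_next.flatten())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Calling freeze(decoder) once before language learning gives the fixed-D variant; skipping it and including D's parameters in the optimizer gives the initialization-only variant also mentioned in Section 3.2.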
We tune the baseline representation size independently since it may perform best under different conditions, choosing among a large range of comparable sizes (details in Appendix A.1). To evaluate the quality of our representations, we also compare with a system using hand-designed logical representations (Wang et al., 2016). While not strictly an upper bound, the human-designed representations were designed with intimate knowledge of the data environment and so provide a very good representation of actions people might take. This makes them a strong point of comparison for our unsupervised action representations. Table 1 shows the results on this task. We find that training the action representation with environment learning provides a very large gain in performance over an identical network with no prelinguistic training, from 17.9% to 25.9%. In sections 5 and 6 below, we’ll add discrete representations and an additional loss term which together bring the accuracy to 28.5%, an absolute increase of more than 10% over the baseline. Comparing against the system with human-designed representations shows that the environment learning pre-training substantially narrows the performance gap between hand designed representations and representations learned as part of an end-to-end neural system. 4.2 String Manipulation The second task we use to test our method is string manipulation. In this task a state s is a string of characters and actions correspond to applying a transformation that inserts or replaces characters in the string, as demonstrated in Figure 3. We use the human annotations gathered by Andreas et al. (2018), but adapt the setup to better measure dataefficiency. The baseline neural model was unable to learn useful models for this task using data sizes approLearned Representations (this work) Baseline 17.9 Environment Learning 25.9 + Discrete a (Section 5) 27.6 + Encoder matching (Section 6) 28.5 Human-Designed Representations Wang et al. (2016) 33.8 Table 1: Online accuracy for the block stacking task.2 Pre-learning action representations with environment learning greatly improves performance over the baseline model, substantially narrowing the gap between hand designed representations and representations learned as part of an end-to-end neural system. Note that these numbers represent accuracy after learning from only tens of examples. priate for the online learning setup we used in the previous task, so we instead adopt a slightly different evaluation where accuracy at different data sizes is compared. We structure the data for evaluation as follows: First, we group the data so that each group contains only a small number of instructions (10). In the original data, each instruction comes with multiple example strings, so we create distinct datapoints s, s′, c for each example with the instruction string repeated. Our goal is to see how many examples are needed for a model to learn to apply a set of 10 instructions. We train a model on training sets of different sizes and evaluate accuracy on a held-out set of 200 examples. We are primarily interested in generalization across new environment states, so the held-out set consists of examples with the same instructions but new initial states s. Due to high data requirements of the baseline neural system, we found it necessary to augment the set of examples for each instruction with additional generated examples according to the regular expressions included with the dataset. 
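A sketch of the example augmentation mentioned above: new (s, s') pairs for an instruction are produced by applying its regular expression to fresh dictionary words. The rule format and the word list below are placeholders, not the dataset's actual generators.

import random
import re

def augment_examples(pattern, replacement, words, k):
    """Generate up to k extra (s, s') pairs for one instruction from its regular expression."""
    examples = []
    for s in random.sample(words, min(k, len(words))):
        s_next = re.sub(pattern, replacement, s)
        if s_next != s:                    # keep only transitions that actually change the string
            examples.append((s, s_next))
    return examples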
Our final metric is the average accuracy across 5 instruction groups, and we plot this accuracy for different training set sizes. State transitions for environment learning are generated synthetically by selecting words from a dictionary and applying regular expressions, where the regular expressions to apply were sampled from a regular-expression generation procedure written by the creators of the original dataset. The environment learning procedure is exposed to 2Although the variance between runs was small relative to the gaps in performance, we report an average over three random initializations to ensure a fair comparison. 1951 c replace consonants with p x s fines s′ pxipxepx c add a letter k before every b s rabbles s′ rakbkbles c replace vowel consonant pairing with v g s thatched s′ thvgchvg c add b for the third letter s thanks s′ thbanks Figure 3: Examples from the string manipulation task along with desired outputs. transitions from thousands of unique regular expressions that it must make sense of and learn to represent. 4.2.1 Network Architecture For this task, the state inputs and outputs are represented as sequences of characters. The encoder E runs a LSTM over the character sequences for s and s′, then combines the final states with a feedforward network to get a. The decoder D runs a LSTM over the characters of s, combines this with the representation a, then outputs s′ using another LSTM. The module architecture details and hyperparameters can be found in Appendix A.2. Since our evaluation for this task considers larger dataset sizes in addition to very small sizes, we do not fix the parameters of the decoder D as we did in the previous task, but instead use the pre-trained decoder as initialization and train it along with the language module parameters. Allowing the parameters to train gives the decoder more power to change its representations when it has enough data to do so, while the initialization helps it generalize much better, as demonstrated by our results below. 4.2.2 Results As with the other dataset (Section 4.1), we compare the full model with a baseline that has no environment learning, but an otherwise identical architecture. To ensure a fair comparison, we tune the baseline representation size separately, choosing the best from a range of comparable sizes (see Appendix A.2). Figure 4 plots the accuracy across different data Figure 4: Accuracy for the string manipulation task as the number of examples (s, s′, c) is increased. Environment learning pre-training increases data efficiency by an order of magnitude or more. The results in yellow include additional improvements described in sections 5 and 6 below. sizes of the baseline neural model and the model with environment learning pre-training. Note that models are trained to convergence, so this plot is intended to indicate data efficiency, not training speed (though training speed is also likely to increase at similar rates). As seen in the figure, environment learning substantially increases data efficiency on this task. At small data sizes, the baseline model struggles to generalize across different states s, often choosing to output one of the training outputs s′ rather than learning a rule and applying it to s. Environment learning greatly increases the ability of the model to find the correct generalization. 
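For concreteness, the string-task decoder D of Section 4.2.1 might be wired as in the sketch below (PyTorch-style). Layer sizes follow Appendix A.2; teacher forcing, the initial cell state, and the exact way a enters the output LSTM are assumptions.

import torch
import torch.nn as nn

class StringDecoder(nn.Module):
    """D(s, a) -> s': read s with one LSTM, mix in a, write s' with another LSTM."""

    def __init__(self, vocab_size, char_dim=50, hidden=500, action_dim=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, char_dim)
        self.read_lstm = nn.LSTM(char_dim, hidden, batch_first=True)
        self.combine = nn.Linear(hidden + action_dim, hidden)     # sets the writer's start state
        self.write_lstm = nn.LSTM(char_dim + action_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, s, a, s_next_in):        # s_next_in: gold s' shifted, for teacher forcing
        _, (h, _) = self.read_lstm(self.embed(s))
        h0 = self.combine(torch.cat([h[-1], a], dim=-1)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        steps = s_next_in.size(1)
        chars = torch.cat([self.embed(s_next_in),
                           a.unsqueeze(1).expand(-1, steps, -1)], dim=-1)
        out, _ = self.write_lstm(chars, (h0, c0))
        return self.out(out)                   # character logits for s'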
5 Discrete Action Representations In this section, we describe a variant of our model where we use a discrete representation a instead of a continuous one and evaluate this variant on our two tasks. Semantics is often defined in terms of discrete logical structures. Even in continuous environments, it is often natural to describe objects and relations in discrete ways. Using a discrete space for our learned action representations can provide useful inductive bias for capturing this discrete structure. In addition, a discrete representation has the potential advantages of increased robustness and increased control of information flow during environment learning. When using discrete representations, we divide our a into n different discrete random variables where each variable selects its value from one of k 1952 categories. We train the discrete representation using the Gumbel-Softmax (Jang et al., 2017; Maddison et al., 2017), which gives us a continuous relaxation of the discrete variables that we can backpropagate through. The Gumbel-Softmax operation transforms an n × k vector into n discrete random variables, which we represent as one-hot vectors and feed through the rest of the network just as we would a continuous representation. The Gumbel-Softmax is calculated as G(xi) = exp(xi + ϵi) Pk j=0 exp(xj + ϵj) where ϵ are i.i.d. samples from the Gumbel(0,1) distribution and the vector x represents unnormalized log probabilities for each of the k categories. This operation is analogous to a softening of a sample from the distribution. While the original work suggested the use of an annealed temperature parameter, we did not find it necessary in our experiments. We use the straight-through variant, where the discrete mode of the softmax distribution is used in the forward pass, but the backward pass is run as if we had used the continuous value G(xi). We found that a representation with n = 20 variables and k = 30 values works well for all our experiments. Using discrete representations instead of continuous representations further improves environment learning results on both tasks, increasing the block stacking task accuracy from 25.9% to 27.6% (Table 1) and improving string manipulation on moderate training sizes (200 examples) from 24.7% to 36.9%. We also ran the baseline neural models with discrete representations for comparison but did not observe any performance gains, indicating that the discrete representations are useful primarily when used with environment learning pre-training. 6 Encoder Representation Matching One potential difficulty that may occur when moving from the environment learning to the language learning phase is that the language module L could choose to use parts of the action representation space that were not used by the encoder during environment learning. Because the decoder has not seen these representations, it may not have useful meanings associated with them, causing it to generalize in a suboptimal way. In this section, we introduce a technique to alleviate this problem and show that it can lead to an additional improvement in performance. Our fix uses an additional loss term to encourage the language module L to output representations that are similar to those used by the encoder E. For a particular input c, s, s′ in the language learning phase, we run the encoder on s, s′ to generate a possible representation aE of this transition. We then add an objective term for the log likelihood of aE under L’s output distribution. 
The full objective during language learning is then arg max θL  log PD(s′|s, L(c)) + λ log PL(aE|c)  (3) where the encoder matching weight λ is a tuned constant. PL is the softmax probability from the output of the language module when using discrete representations for a, and aE is the discrete mode of the encoder output distribution.3 Using this technique on the block stacking task (with λ = .01), we see a performance gain of .9% over discrete-representation environment learning to reach an accuracy of 28.5%. This number represents our full model performance and demonstrates more than 10% absolute improvement over the baseline. The additional loss also provides gains on string manipulation, especially on very small data sizes (e.g. from 3.9% to 14.8% with only 10 examples). The performance curve of our complete model is shown in Figure 4. With our full model, it takes less than 50 examples to reach the same performance as with 1000 examples using a standard neural approach. 7 Exploring the Learned Representation A primary goal of the environment learning procedure is to find a representation of actions that generalizes in a semantically minimal and coherent way. In this section, we perform analysis to see what meanings the learned action representations capture in the block stacking environment. Since logical forms are engineered to capture semantics that we as humans consider natural, we compare our learned representations with a system of logical forms to see if they capture similar meanings without having been manually constrained to do 3When using continuous representations, a ℓ2 distance penalty could be used to encourage similarity between the output of L and E, though this tended to be less effective in our experiments. 1953 so. We compare the semantics of the learned and logical representations by comparing their effect on different states, based on the method of Andreas and Klein (2017). We test an encoder and decoder using the following procedure: First, we generate a random transition s1, s′ 1 from the same distribution used for environment learning and run the encoder to generate an action representation a1 for this transition. Then, we generate a new state s2 from the environment and run the decoder on the new state with the representation generated for the original state: D(a1, s2) 7→¯s2. We are interested in whether the output ¯s2 of this decoding operation corresponds to a generalization that would be made by a simple logical form. Using a set of logical forms that correspond to common actions in the block stacking environment, we find all simple logical forms that apply to the original transition s1, s′ 1 and all forms that apply to the predicted transition s2, ¯s2. If the intersection of these two sets of logical forms is non-empty, then the decoder’s interpretation of the representation a1 is consistent with some simple logical form. We repeat this procedure on 10,000 state transitions to form a logical form consistency metric. Running this test on our best-performance model, we find that 84% of the generalizations are consistent with one of the simple logical forms we defined. This result indicates that while the generalization doesn’t perfectly match our logical form system, it does have a noteworthy similarity. An inspection of the cases that did not align with the logical forms found that the majority of the “errors” could in fact be represented by logical forms, but ones that were not minimal. 
In these cases, the generalization isn’t unreasonable, but has slightly more complexity than is necessary. For example, from a transition that could be described either as stack a blue block on the leftmost block or separately as stack blue blocks on red blocks (where red only appears in the leftmost position), the representation a that is generated generalizes across different states as the conjunction of these two meanings (stack blue blocks on the leftmost block AND on red blocks), even though no transitions observed during environment learning would need this extra complexity to be accurately described. 8 Related Work Many other works use autoencoders to form representations in an unsupervised or semi-supervised way. Variants such as denoising autoencoders (Vincent et al., 2008) and variational autoencoders (Kingma and Welling, 2013) have been used for various vision and language tasks. In the area of semantic grounding, Koˇcisk´y et al. (2016) perform semi-supervised semantic parsing using an autoencoder where the latent state takes the form of language. Our approach also relates to recent work on learning artificial languages by simulating agents interacting in an environment (Mordatch and Abbeel, 2018; Das et al., 2017; Kottur et al., 2017, i.a.). Our environment learning procedure could be viewed as a language learning game where the encoder is a speaker and the decoder is a listener. The speaker must create a “language” a that allows the decoder to complete a task. Many of these papers have found that it is possible to induce representations that align semantically with language humans use, as explored in detail in Andreas and Klein (2017). Our analysis in Section 7 is based on the method from this work. Model-based reinforcement learning is another area of work that improves data-efficiency by learning from observations of an environment (Wang et al., 2018; Deisenroth et al., 2013; Kaiser et al., 2019). It differs from the current work in which aspect of the environment it seeks to capture: in model-based RL the goal is to model which states will result from taking a particular action, but in this work we aim to learn patterns in what actions tend to be chosen by a knowledgeable actor. Another related line of research uses language to guide learning about an environment (Branavan et al., 2012; Srivastava et al., 2017; Andreas et al., 2018; Hancock et al., 2018). These papers use language to learn about an environment more efficiently, which can be seen as a kind of inverse to our work, where we use environment knowledge to learn language more efficiently. Finally, recent work by Leonandya et al. (2018) also explores neural architectures for the block stacking task we used in section 4.1. The authors recognize the need for additional inductive bias, and introduce this bias by creating additional synthetic supervised data with artificial language, creating a transfer learning-style setup. This is in 1954 contrast to our unsupervised pre-training method that does not need language for the additional data. Even with their stronger data assumptions, their online accuracy evaluation reaches just 23%, compared to our result of 28.5%, providing independent verification of the difficulty of this task for neural networks. 9 Conclusion It is well known that neural methods do best when given extremely large amounts of data. 
As a result, much of AI and NLP community has focused on making larger and larger datasets, but we believe it is equally important to go the other direction and explore methods that help performance with little data. This work introduces one such method. Inspired by the idea that it is easier to map language to pre-linguistic concepts, we show that when grounding language to actions in an environment, pre-learning representations of actions can help us learn language from fewer languageaction pairings. Acknowledgments This work is supported by the DARPA Explainable Artificial Intelligence (XAI) program. We would like to thank the members of the Berkeley NLP group and the anonymous reviewers for their helpful feedback. References Jacob Andreas and Dan Klein. 2017. Analogs of linguistic structure in deep representations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2893– 2897. Association for Computational Linguistics. Jacob Andreas, Dan Klein, and Sergey Levine. 2018. Learning with latent language. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2166–2179. Association for Computational Linguistics. Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1:49–62. Paul Bloom. 2000. How children learn the meanings of words. MIT Press. SRK Branavan, David Silver, and Regina Barzilay. 2012. Learning to win by reading manuals in a monte-carlo framework. Journal of Artificial Intelligence Research, 43:661–704. Devendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, and Ruslan Salakhutdinov. 2018. Gatedattention architectures for task-oriented language grounding. In AAAI. Abhishek Das, Satwik Kottur, Jos´e MF Moura, Stefan Lee, and Dhruv Batra. 2017. Learning cooperative visual dialog agents with deep reinforcement learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 2951–2960. Marc Peter Deisenroth, Gerhard Neumann, Jan Peters, et al. 2013. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1–2):1–142. Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher R´e. 2018. Training classifiers with natural language explanations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1884– 1895. Association for Computational Linguistics. Susan J Hespos and Elizabeth S Spelke. 2004. Conceptual precursors to language. Nature, 430(6998):453. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In International Conference on Learning Representations. Lukasz Kaiser, Mohammad Babaeizadeh, Piotr Milos, Blazej Osinski, Roy H Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, et al. 2019. Modelbased reinforcement learning for atari. arXiv preprint arXiv:1903.00374. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114. Tom´aˇs Koˇcisk´y, G´abor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. 2016. Semantic parsing with semi-supervised sequential autoencoders. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1078– 1087. Association for Computational Linguistics. Satwik Kottur, Jos´e Moura, Stefan Lee, and Dhruv Batra. 2017. Natural language does not emerge ‘naturally’ in multi-agent dialog. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2962–2967. Association for Computational Linguistics. Rezka Leonandya, Elia Bruni, Dieuwke Hupkes, and Germ´an Kruszewski. 2018. The fast and the flexible: training neural networks to learn to follow instructions from small data. arXiv preprint arXiv:1809.06194. 1955 Chris J Maddison, Andriy Mnih, and Yee Whye Teh. 2017. The concrete distribution: A continuous relaxation of discrete random variables. In International Conference on Learning Representations. Hongyuan Mei, Mohit Bansal, and Matthew R Walter. 2016. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In AAAI, volume 1, page 2. Dipendra Misra, Andrew Bennett, Valts Blukis, Eyvind Niklasson, Max Shatkhin, and Yoav Artzi. 2018. Mapping instructions to actions in 3d environments with visual goal prediction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2667–2678. Association for Computational Linguistics. Igor Mordatch and Pieter Abbeel. 2018. Emergence of grounded compositional language in multi-agent populations. In Thirty-Second AAAI Conference on Artificial Intelligence. Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2017. Joint concept learning and semantic parsing from natural language explanations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1527–1536. Association for Computational Linguistics. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pages 1096–1103. ACM. Sida I. Wang, Percy Liang, and Christopher D. Manning. 2016. Learning language games through interaction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2368–2378. Association for Computational Linguistics. Xin Wang, Wenhan Xiong, Hongmin Wang, and William Yang Wang. 2018. Look before you leap: Bridging model-free and model-based reinforcement learning for planned-ahead vision-andlanguage navigation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 37–53. Luke Zettlemoyer and Mike Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI), pages 658– 666. A Neural Architectures and Hyperparameters A.1 Block Stacking The encoder and decoder module architectures for the block stacking task are shown in Figure 5. The encoder module E consists of convolutions over the states s and s′, subtraction of the two representations, pooling over locations, and finally a fully connected network which outputs the representation a. The fully connected network has a single hidden layer. The decoder module D consists of two convolution layers where the input is state s and where a is broadcast across all positions and concatenated with the input to the second layer. 
All convolutions and feedforward layers for E and D have dimension 200 and all intermediate layers are followed by ReLU non-linearities. Dropout with probability 0.5 was used on the encoder feedforward hidden layer and before the last convolution layer in the decoder. The language module L uses a LSTM encoder (Figure 6). It takes a command c as a sequence of learned word embeddings, runs an LSTM over them, then projects from the final cell state to get the output vector a. The word embeddings have dimension 100 and the LSTM has hidden size 200. When using a continuous action representation, a has dimension 600. When using a discrete representation, we use n = 20 discrete variables where each takes one of k = 30 values. Environment learning is run on 500,000 batches of size 20, after which we fix the parameters of D. During language learning, we optimize L for 50 epochs after each new example is presented, using a batch size of 1. All optimization is done using Adam with learning rate 0.001. To ensure a fair comparison with the baseline, we ran the baseline system with both continuous and discrete representations and took the best. Generally, the baseline performed slightly better (i) Encoder E (ii) Decoder D Figure 5: Architecture for block stacking task modules. 1956 Figure 6: The language module L used for both the block stacking and string manipulation tasks uses a LSTM over the words of the command c. with continuous representations. We ran with continuous sizes 20, 50, 100, 300, and 600; selecting the best result. This range was chosen to be between the number of discrete variables n and the total number of inputs to the discretization n × k. A.2 String Manipulation Figure 7 shows the encoder and decoder module architectures for the string manipulation task. The encoder E runs a LSTM over the character sequences for s and s′, using separate LSTMs for the two sequences, but tying their parameters. The final states of the two LSTMs are then concatenated and fed into a feedforward network with one hidden layer that outputs the action representation a. The decoder D consists of a LSTM over the sequence s, a feedforward network of a single linear layer combining a with the LSTM final state, and a LSTM that outputs the sequence s′, where the output LSTM’s initial state comes from the output of the feedforward network. a is also concatenated with the previous output embedding that is fed into the input of the LSTM at each timestep. The character embeddings input to the LSTM have dimension 50, and all LSTM and feedforward layers have dimension 500. When using a continuous representation a, we use a representation dimension of 20, though the results were not overly sensitive to this value. When using a discrete representation, we use n = 20 variables where each takes one of k = 30 values. The language module for this task is identical to the module used for the block stacking task, as shown in Figure 6. The training and optimizer hyperparameters are the same as in the block stacking task. As in the block stacking task, we tune the baseline representation hyperparameters over continuous sizes 20, 50, 100, 300, and 600, as well as an identical-sized discrete representation. (i) Encoder E (ii) Decoder D Figure 7: Architecture for string manipulation task modules.
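To make the discrete action representation used throughout the experiments concrete (n = 20 variables with k = 30 values each, straight-through estimator, no temperature annealing), the sketch below relies on PyTorch's built-in Gumbel-Softmax; whether the original implementation used this helper is an assumption.

import torch.nn.functional as F

def discrete_action(logits, n=20, k=30):
    """logits: (batch, n * k) scores; returns straight-through one-hot codes of the same shape."""
    one_hots = F.gumbel_softmax(logits.view(-1, n, k), tau=1.0, hard=True, dim=-1)
    return one_hots.view(-1, n * k)            # hard samples forward, soft gradients backward

The returned vector is fed through the rest of the network exactly as a continuous representation would be, matching the description in Section 5.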
2019
188
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1957–1968 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1957 Reinforced Training Data Selection for Domain Adaptation Miaofeng Liu♣∗†, Yan Song♠†, Hongbin Zou♦∗, and Tong Zhang♥ ♣MILA & DIRO, Universit´e de Montr´eal [email protected] ♠Tencent AI Lab [email protected] ♦School of Electronic Engineering, Xidian University [email protected] ♥The Hong Kong University of Science and Technology [email protected] Abstract Supervised models suffer from the problem of domain shifting where distribution mismatch in the data across domains greatly affect model performance. To solve the problem, training data selection (TDS) has been proven to be a prospective solution for domain adaptation in leveraging appropriate data. However, conventional TDS methods normally requires a predefined threshold which is neither easy to set nor can be applied across tasks, and models are trained separately with the TDS process. To make TDS self-adapted to data and task, and to combine it with model training, in this paper, we propose a reinforcement learning (RL) framework that synchronously searches for training instances relevant to the target domain and learns better representations for them. A selection distribution generator (SDG) is designed to perform the selection and is updated according to the rewards computed from the selected data, where a predictor is included in the framework to ensure a taskspecific model can be trained on the selected data and provides feedback to rewards. Experimental results from part-of-speech tagging, dependency parsing, and sentiment analysis, as well as ablation studies, illustrate that the proposed framework is not only effective in data selection and representation, but also generalized to accommodate different NLP tasks. 1 Introduction Learning with massive data suffers from “Pyrrhic victory” where huge amounts of resource, e.g., computation, annotation, storage, etc., are consumed with many issues, one of which is that data quality considerably affects the performance of learned models. Especially in natural language ∗This work was done during the internship of Miaofeng Liu and Hongbin Zou at Tencent AI Lab. † Corresponding authors. processing (NLP), such phenomenon is incredibly significant where noise and inaccurate annotations are demolishing models’ robustness when applying them across domains (Bollegala et al., 2011; Plank and Van Noord, 2011; Song and Xia, 2013; Ruder and Plank, 2018; Liu et al., 2018). Statistically, distribution mismatch is often observed between training and test data in such case. As a straightforward solution to reduce the impact of the mismatch, TDS is effective for learning across domains (Ruder and Plank, 2017) by preventing negative transfer from irrelevant samples and noisy labels (Rosenstein et al., 2005) while achieving equivalent performance with less computational efforts (Fan et al., 2017; Feng et al., 2018), especially when compared with learning-intensive domain adaptation methods such as sample reweighing (Borgwardt et al., 2006), feature distribution matching (Tzeng et al., 2014) and representation learning (Csurka, 2017). 
Although various TDS-based domain adaptation approaches were proposed for NLP tasks (Daum´e III, 2007; Blitzer et al., 2007a; Søgaard, 2011), most of them only consider scoring or ranking training data under a certain metric over the entire dataset, and then select the top n (or a proportion, which is a predefined hyper-parameter) items to learn. However, such pre-designed metrics are, always, neither able to cover effective characteristics for transferring domain knowledge nor can be applied in different data nature. Even though there exists a versatile metric, its hyper-parameter setting still demands further explorations. Moreover, conventional TDS is separate from model training, which requires more steps before an adapted model can be used, and restricts selecting appropriate instances when there is no feedback from the task. In doing so, the features or data representations of the selected instances are not adaptively learned and optimized, especially for neural models. Smarter TDS approaches are thus expected for domain adap1958 tation to accommodate different data and tasks. Consider that TDS is, in general, a combinatorial optimization problem with exponential complexity, it is impossible to try all possible combinations of training instances. An efficient solution to this problem is to transform it into a sequence of decision-making on whether select a (or a group of) training instance at each step, where previous decision should influence later ones. In this case, RL can be an appropriate vechile. To this end, one has to tackle two missions: to properly measure the correlation between a training sample and the target domain, and to guide the selection process with the feedback from the selected samples according to a specific task. For these missions, in this paper, we propose an RL framework for TDS that jointly learns the representation of the training data with respect to the target domain and selects them according to a learned distribution of selection probabilities. In detail, there are two major components in our framework: a selection distribution generator (SDG) for producing the selection probabilities, and a task-specific predictor including a feature extractor for learning data representations and a classifier1 for measuring the performance of the selected data. The SDG and the predictor are pipelined by taking each others’ output as their inputs and optimized accordingly via RL. With this framework, RL ensures the TDS process being conducted without requiring a predefined threshold and can automatically select the best instances in the training data as well as learn task- and domainspecific representations for them according to the target domain. As a result, useful information from the source domain is properly organized and represented and the redundant or noisy data are avoided in training the target domain specific models. Experimental results from three NLP tasks, namely, part-of-speech (POS) tagging, dependency parsing and sentiment analysis, illustrate that our approach achieves competitive performance, which confirm the validity and effectiveness of our approach. The code of this work is available at https: //github.com/timerstime/SDG4DA 2 The Approach We follow the common TDS setting in domain adaptation, i.e., for a task T , one taking labeled instances from a source domain DS as the pool, and 1It is not necessarily a classifier, e.g., such as a tagger. However we use the term classifier for simplicity. 
some unlabeled data from a target domain DT as the guidance. The routine of expected approaches for TDS is then to generate an optimal subset of data from the pool and train a model on it for T . Based on such routine, we design our approach with an architecture illustrated in Figure 1, with two major components, namely, the SDG and the predictor. The key component for TDS is the SDG, which produces a distribution vector based on the representation of the selected source data from the last selection step, then data instances are selected according to the vector and new reward is generated for next round of data selection. To update the SDG, different measurements can be used to assess the discrepancy between the representations of the selected source data and the guidance set and then approximates the value function for updating. The predictor takes the selected data and generates their representations in the feature extractor and trains a task-specific model by the classifier. The details of our framework is unfolded in the following subsections, in which we give the details of the two components and how they are jointly learned. 2.1 The Predictor The predictor is the main component to train a particular model for T . In our approach we decompose the predictor into two parts, the feature extractor and the classifier, and use them separately. The feature extractor serves as the representation learning module that transform selected data to vectors, while the classifier trains on the vector for T . In this study, the predictor is a neural model so that the aforementioned separation are conducted by splitting neural layers. Normally, the feature extractor is the first n-1 layers of the predictor with n layers in total; the classifier is then the last layer. The Feature Extractor Data in its original form, especially natural language, is usually difficult to be directly used in computation. The feature extractor thus serves as a critical component in our approach to transform the data into distributed representations for their efficient use. There are twoway inputs for the feature extractor. One is the guidance set XT g = {xT 1 , xT 2 , ..., xT m}, a collection of unlabeled data drawn from the target domain, serving as the reference for TDS. The other input is the selected data from the source domain in a “data bag”, which is a batch of a certain amount of instances to facilitate TDS in this study. In detail, let XS = {xS 1 , xS 2 , ..., xS n}, ∀xS i ∈XS denote the 1959 Figure 1: The architecture of our TDS framework, with a predictor (including a feature extractor and a classifier) and a selection distribution generator. All black solid arrows refer to data flow, while the red dashed arrow denotes reward with the orange dotted arrows indicating back-propagation of gradients from training the predictor. data from the source domain, we uniformly and randomly partitions the entire data set into N disjoint data bags marked as {B1, B2, ..., BN}, with Bj = {xS (j−1)n/N+1, xS (j−1)n/N+2, ..., xS jn/N} and j ∈{1, 2, ..., N}. Through the feature extractor, the guidance set and the selected data are transformed into two collections of distribution vectors. The Classifier When each TDS round is done, the classifier is trained on the representations of the selected data for T . During the training, the classifier passes the gradients to the feature extractor according to the labels of the selected data. 
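As an illustration of this split, the sketch below slices an n-layer predictor into its feature extractor (first n-1 layers) and classifier (last layer) and performs one supervised update on selected instances; the plain feed-forward stack, the input dimension, and the label space are placeholders, since the actual predictor is task-specific.

import torch.nn as nn
import torch.nn.functional as F

num_classes, input_dim = 2, 300                # placeholders; both depend on the task T
predictor = nn.Sequential(nn.Linear(input_dim, 200), nn.ReLU(),
                          nn.Linear(200, 200), nn.ReLU(),
                          nn.Linear(200, num_classes))
feature_extractor = predictor[:-1]             # first n-1 layers: data -> representations
classifier = predictor[-1]                     # last layer: representations -> labels

def predictor_step(x_selected, y_selected, optimizer):
    r = feature_extractor(x_selected)          # representations of the selected instances
    loss = F.cross_entropy(classifier(r), y_selected)
    optimizer.zero_grad()
    loss.backward()                            # gradients reach the feature extractor as well
    optimizer.step()
    # the updated extractor subsequently re-represents the next data bag and the guidance set
    return loss.item()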
The parameters of the classifier and the feature extractor are updated accordingly (with a learning rate β). 2.2 The Selection Distribution Generator A multi-layer perceptron (MLP) model is used as the SDG, which learns the selection policy optimized by the reward from the representations of the guidance set and the selected data by RL. In doing so, at each step, the SDG is fed by a collection of representations for a data bag from the feature extractor. We denote the collection ΦBj = {rj 1, rj 2, ..., rj |Bj|}, where rj l (l = 1, 2, ..., |Bj|) is the vector of the l-th sample2 in Bj.3 Then SDG maps ΦBj into a vector DBj = (pj 1, pj 2, ..., pj |Bj|), pj l (l = 1, 2, ..., |Bj|), which represents the probability for each instance on the confidence of select2Representations in the collection follow the same order of their corresponding data instances in the bag. 3Similarly, the collection of representations for the guidance set is denoted as Φt. ing it. To learn the SDG, each ΦBj is measured with Φt to give a reward in our framework, which is described in the following subsection. 2.3 The Reinforcement Learning Framework We jointly train the SDG and the predictor with policy gradient method (Sutton et al., 1999), which favors actions with high rewards from better selected instances. The entire learning process is described in Algorithm 1, in which the notations are described in the following texts. RL Components in Learning the SDG • State (s1, s2, ...sj, ...sN) includes a collection of states for all j with respect to N data bags, where each sj indicates a state including selected instances ˆBj sampled from Bj according to the distribution vector DBj, and parameters of the feature extractor for the ˆBj. For simplicity we use Φ ˆBj and Φt to represent state sj. • Action For each state, the action space A is a 01 judgment to decide if selecting an instance (1) or not (0). An action a = {ak}|Bj| k=1 ∈{0, 1}|Bj|, which is obtained from D ˆBj.4 After each action, the framework gives new Φ ˆBj, then transforms state s into s′. The policy is defined as PW(a|s). • Reward The mathematical goal of TDS is to ensure that the selected data fit the distribution of the target domain. Hence we set a reward 4The process of assigning the value, i.e., 1 or 0, to k-th element of a can be formulated by sampling from a Bernoulli distribution parameterized by pj k of DBj w.r.t. Bj. 1960 Algorithm 1: Joint training algorithm in our approach Input: Training data in bags B = {B1, B2, ..., BN}; epochs L; W (SDG), Ψ (predictor, including feature extractor Θ); Loss function of the predictor F(Ψ, ˆBj); nJ; d(·, ·); γ. Output: Updated W and Ψ (Θ). Initialize W, Ψ(Θ) with standard Gaussian distribution; for epoch l = 1 to L do Σ = 0; for k = 1 to nJ do Σr = 0; Shuffle {B1, B2, ..., BN}; for each Bj ∈B do Φ sj Bj ←Θj−1(Bj); Φ sj t ←Θj−1(XT g ); On current bag state sj, Dj ←W(Φ sj t ); select Φ sj ˆ Bj from Φ sj Bj via Dj (take action aj); r(sj−1, aj, sj) ← d(Φ sj−1 ˆ Bj−1, Φ sj−1 t ) −γd(Φ sj Bj, Φ sj t ) Σr ←Σr + γj−1r(sj−1, aj, sj) Ψ ←Ψ −β∇ΨF(Ψ, ˆBj); ( Θj ←Θj−1 −β∇ΘF(Ψ, ˆBj) ) ; end Σ ←Σ + PN j=1 ∇W log πW(ak j |sk j )Σr; end ∇W eJ(W) ←1 nJ Σ; W ←W + τ∇W eJ(W); end r(s, a, s′) to assess the distance between Φ ˆBj and Φt in the current state (s′) and its previous state (s): r(s, a, s′) = d(Φs ˆBj−1, Φs t) −γd(Φs′ ˆBj, Φs′ t ) (1) where d(·, ·) is a distribution discrepancy measurement, which can be implemented by different information-bearing functions. 
γ ∈(0, 1) is a discounting constant that decreases the impact from future distribution differences. Note that Eq. (1) is conducted in a sequential manner based on two adjacent data bags Bj−1 and Bj, of which Φs′ ˆBj is impacted by Φs ˆBj−1 via parameters Ψ of the feature extractor updated by ˆBj−1. Consequently, the state transition probability P(s′|s, a) is determined by stochastic optimization and other randomness in training, e.g., dropout (Srivastava et al., 2014). When better instances are selected, the reward is then expected to produce a higher value because the measurement for the previous state d(Φs ˆBj−1, Φs t) is supposed to give a larger distance between Φ ˆBj−1 and Φt than that for the current state. Distribution Discrepancy Measurements To measure each ˆBj and the XT g , let P = (p1, · · · , pn) be the normalized element-wise average of Φ ˆBj and Q the average of Φt similarly, we use the following measurements for d(·, ·): • JS: The Jensen-Shannon divergence (Lin, 1991), d(P, Q) = 1 2[DKL(P||M) + DKL(Q||M)] where DKL(P||Q) = Pn i=1 pi log pi qi , with M = 1 2(P + Q). • MMD: The maximum mean discrepancy (Borgwardt et al., 2006), d(P, Q) = ∥P −Q∥. • R ´ENYI: The symmetric R´enyi divergence (R´enyi, 1961), d(P, Q) = 1 2[Ry(P, M) + Ry(Q, M)], Ry(P, Q) = 1 α−1 log(Pn i=1 pα i qα−1 i ). We set α = 0.99 following Van Asch and Daelemans (2010). • LOSS: The guidance loss, defined as d = −1 m Pm i=1 P yt∈Yt yt log pΦ(yt|xT i ), where yt is the label of instance t from the guidance set, and pφ the learned conditional probability of the predictor. Note that, different from aforementioned measurements, LOSS requires labels from the target domain, thus is only set as a comparison to other measurements used in our approach. Optimization The following object is optimized to obtain the optimal distribution generation policy: J(W) = EPW(a|s)[ N X j=1 γj−1r(sj, aj)] (2) Then the parameters of the SDG, i.e., W, is updated via policy gradient (Sutton et al., 1999) by W ←W + τ∇W eJ(W) (3) where τ is the discounting learning rate5, the gradient ∇WJ(W) is approximated by ∇W eJ(W) = 1 nJ nJ X k=1 N X j=1 ∇W log πW(ak j |sk j ) N X j=1 γj−1r(sk j , ak j ), with j referring to the j-th step (corresponding to the j-th data bag) in RL, and k the k-th selection process to estimate ∇WJ(W), which is updated after every nJ times of selection over all N data bags, where nJ is a predefined hyper-parameter. 5τ and the aforementioned β can be self-adapted by the optimizer, such as Adam (Kingma and Ba, 2014). 1961 TASK POS TAGGING/DEPENDENCY PARSING SENTIMENT ANALYSIS DOMAIN A EM N R WB WSJ B D K E LABELED 3.5K 4.9K 2.4K 3.8K 2.0K 3.0K 2K 2K 2K 2K UNLABELED 27K 1,194K 1,000K 1,965K 525K 30K 4.5K 3.6K 5.7K 5.9K Table 1: Statistics of all datasets used in our experiments, with the number presenting labeled or unlabeled samples in each domain. The domain abbreviations in different tasks are explained as follows. A:Answer, EM:Email, N:News, R:Reviews, WB:Weblogs, WSJ:Wall Street Journal, and B:Book, D:DVD, K:Kitchen, E:Electronics. 3 Experiment To evaluate our approach, we conduct experiments on three representative NLP tasks: POS tagging, dependency parsing, and sentiment analysis. Details about the experiments are described as follows. 3.1 Datasets Two popular datasets are used in our experiments. For POS tagging and dependency parsing, we use the dataset from the SANCL 2012 shared task (Petrov and McDonald, 2012), with six different domains. 
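To make the reward of Eq. (1) concrete under the JS measurement, a NumPy sketch is given below. P and Q are the normalized element-wise averages of the selected-bag and guidance-set representations; the representations are assumed non-negative (e.g., post-ReLU features), and the small clipping constant is our own numerical guard.

import numpy as np

def normalized_mean(reps):
    p = np.clip(np.mean(reps, axis=0), 1e-12, None)    # element-wise average, kept positive
    return p / p.sum()

def js_divergence(p, q):
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * (kl(p, m) + kl(q, m))

def reward(prev_bag_reps, prev_guidance_reps, cur_bag_reps, cur_guidance_reps, gamma):
    # representations are taken under the previous and current feature-extractor states,
    # matching the two terms of Eq. (1); gamma is the discount constant in (0, 1)
    d_prev = js_divergence(normalized_mean(prev_bag_reps), normalized_mean(prev_guidance_reps))
    d_cur = js_divergence(normalized_mean(cur_bag_reps), normalized_mean(cur_guidance_reps))
    return d_prev - gamma * d_cur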
For sentiment analysis, we use the product review dataset from (Blitzer et al., 2007b), with four domains. Note that for all datasets, there exists both labeled and unlabeled samples in each domain. The statistics and the domains for the aforementioned datasets are reported in Table 1. 3.2 Settings A major difference between our approach and other data selection methods is that the threshold (number of instances to be selected), n, is not fixed in our approach. Instead, it chooses the most effective ones automatically. For fair comparison, we record the resulted n from our approach in different tasks and use it in other methods to guide their selection. In all experiments, we use a multi-source domain setting where the source domain includes all labeled data from the dataset except that for the target domain, i.e., we take turns selecting a domain as the target domain, and use the union of the rest as the source domain. The number of bags, N, is set separately for each dataset to ensure a uniform bag size of 1K samples. For the guidance set, we follow Ruder and Plank (2017) and randomly select half of the instances from all the test data in the target domain discarding their labels. Consider that the starting reward needs to be calculated from a reliable feature extractor, we adopt a “soft starting” before the regular training, were we pre-train the predictor on all source data for 2 epochs, then initialize parameters of SDG with A EM N R WB WSJ JS-E 93.16 93.77 94.29 93.32 94.92 94.08 JS-D 92.25 93.43 93.54 92.84 94.45 93.32 T-S 93.59 94.65 94.76 93.92 95.32 94.44 TO-S 93.36 94.65 94.43 94.65 94.03 94.22 T+TO-S 94.33 92.55 93.96 93.94 94.51 94.98 T-S+D 93.64 94.21 93.57 93.86 95.33 93.84 TO-S+D 94.02 94.33 94.62 94.19 94.93 94.67 RANDOM 92.76 93.43 93.75 92.62 93.53 92.68 ALL 95.16 95.90 95.90 95.03 95.79 95.64 SDG (JS) 95.37 95.45 96.23 95.64 96.19 95.74 SDG (MMD) 95.75 96.23 96.40 95.51 96.95 96.12 SDG (R´ENYI) 95.52 96.31 96.62 95.97 96.75 96.35 SDG (LOSS) 95.46 95.77 95.92 95.50 96.03 95.82 Table 2: POS tagging results (accuracy %). Gaussian variables. Afterwards the predictor and SDG follow ordinary learning paradigm in each training epoch. In all experiments, we use Adam (Kingma and Ba, 2014) as the optimizer, and set γ to 0.99 following Fan et al. (2017) and nJ to 3. 3.3 POS tagging The Predictor We use the Bi-LSTM tagger proposed in Plank et al. (2016) as the predictor. Baselines Following Ruder and Plank (2017), we compare our approach to five baselines: 1) JS-E: top instances selected according to Jensen-Shannon divergence. 2) JS-D: top instances selected from the most similar source domain, where the similarity between domains are determined by JensenShannon divergence. 3) Bayesian optimization (Brochu et al., 2010) with the following settings: T-S, term distribution similarity; TO-S, topic distribution similarity; T+TO-S, joint term and topic distribution similarity; T-S+D, term distribution similarity and diversity; TO-S+D, topic distribution similarity and diversity. 5) RANDOM: a random selection model that selects the same number of instances with the n given by our approach. 6) ALL: The predictor is trained on all source data. Results POS tagging results are reported in Table 2. 
Overall, our approach with different distri1962 A EM N R WB WSJ JS-E 81.02 80.53 83.25 84.66 85.36 82.43 JS-D 82.80 79.93 81.77 83.98 83.44 80.61 T-S 83.79 81.09 82.68 84.66 84.85 82.57 TO-S 82.87 81.43 82.07 83.98 84.98 82.90 T+TO-S 82.87 81.13 82.97 84.65 84.43 82.43 T-S+D 83.72 81.60 82.80 84.62 85.44 82.87 TO-S+D 82.60 80.83 84.04 84.45 85.89 82.33 RANDOM 81.28 83.41 81.03 82.67 82.46 80.74 ALL 85.65 87.78 86.07 87.27 85.51 85.56 SDG (JS) 84.03 85.98 84.17 86.25 86.22 85.24 SDG (MMD) 84.19 86.25 84.87 86.80 85.57 84.37 SDG (R´ENYI) 84.55 85.11 85.27 86.93 85.65 85.79 SDG (LOSS) 83.97 85.86 84.05 86.21 86.03 84.98 Table 3: Dependency parsing results (LAS). bution discrepancy metrics outperforms all baselines based on the same predictor. This observation demonstrates the excellent adaptability of our approach in this task although there is complicated structural variance in sentences. Among the four metrics, R´enyi divergence achieve the best overall performance, which is slightly surpassed by MMD in the ANSWER and WEBLOGS domain. We observe around 50 epochs of training to reach convergence of our approach. As a result, 50% training data in the source domain are selected. 3.4 Dependency Parsing The Predictor The Bi-LSTM parser proposed by Kiperwasser and Goldberg (2016) is the predictor. Baselines For dependency parsing, we use the same baselines introduced in the POS tagging task. Results The performance (labeled attachment scores, LAS) of dependency parsing is reported in Table 3. Similar to POS tagging, the term distribution-based method (T-S) as well as its combination with diversity features (T-S+D) outperform other Bayesian optimization baselines. Our models are also shown to be superior than measurement-based as well as neural models significantly in most domains. However, different from POS tagging, in this task, the predictor trained on the entire source data still performs the best on some domains, which can be explained by the complexity of the task. To precisely predict structured parsing results, in spite of noise from different domains, large amount of data might be more helpful because various contextual information is beneficial in text representation learning (Song et al., 2018). In this case, selection based methods sacrifice accuracy for their efficiency with less data. B D E K JS-E 72.49 68.21 76.78 77.54 JS-D 75.28 73.75 72.53 80.05 T-S 75.39 76.27 81.91 83.41 TO-S 76.07 75.92 81.69 83.06 T+TO-S 75.75 76.62 81.74 83.39 T-S+D 76.20 77.60 82.66 84.98 TO-S+D 77.16 79.00 81.92 84.29 SCL 74.57 76.30 78.93 82.07 SST 76.32 78.77 83.57 85.19 DAM 75.61 77.57 82.79 84.23 SDAMS-LS 77.95 78.80 83.98 85.96 SDAMS-SVM 77.86 79.02 84.18 85.78 RANDOM 76.78 75.28 78.25 82.27 ALL 78.48 79.68 80.58 84.50 SDG (JS) 79.37 81.06 82.38 85.78 SDG (MMD) 79.57 81.08 82.68 85.69 SDG (R´ENYI) 80.07 82.07 82.28 86.18 SDG (LOSS) 79.57 80.58 81.88 85.08 Table 4: Sentiment analysis results (accuracy %). Yet, our models, e.g., the SDG (JS) and SDG (R´ENYI), outperform the ALL model in the last two domains, with only half of the source domain data used. We observe that averagely 60 epochs of training is required to obtain the best model. 3.5 Sentiment Analysis The Predictor We adopt the CNN classifier proposed by Kim (2014) as the predictor in this task. Baselines In addition to the baselines for POS tagging and dependency parsing, we use a series of extra baselines from previous studies: 1) SCL, the structural correspondence learning proposed by Blitzer et al. 
(2006); 2) SST, the sentiment sensitive thesaurus method (Bollegala et al., 2011); 3) DAM, a general-purpose multi-source domain adaptation method proposed by Mansour et al. (2008); 4) SDAMS-LS and SDAMS-SVM, the specially designed sentiment domain adaptation approach (Wu and Huang, 2016) for multiple sources with square loss and hinge loss, respectively. Results Table 4 presents the results for sentiment analysis. Similar to previous tasks, it is observed that our approach still performs well in this task, even though compared with the algorithms particularly designed for sentiment analysis (e.g., SDAMS). A potential reason for our weaker results on ELECTRONICS domain is that SDAMS methods use relation graphs among key words as prior knowledge, while our model does not need that and aims for a wider application without such task-specific consideration. Slightly different from previous tasks, in this task, around 40% source data 1963 (a) Accuracies against training epochs. (b) % data selected against training epochs. Figure 2: Investigation curves of using different models on the DVD domain for sentiment analysis. are selected upon the convergence of our approach with around 15 epochs of training. Note that, although there exist other recent domain adaptation methods exclusively designed for sentiment analysis (Barnes et al., 2018; Ziser and Reichart, 2018) with stronger results, their setting mainly focused on single source domain adaptation. Thus they are not directly compared with our models and baselines in Table 4, which is for a more general and challenging setting with multiple source domains. 3.6 Discussion In all three tasks, our approach achieve the best overall performance when there are half or less than half source domain data selected to train the predictor. The comparisons between our approach and the basic distribution measure-based methods, the general-purpose multi-source approach as well as models from previous studies (in sentiment analysis) across all tasks illustrate the superiority of our approach in selecting the most useful instances for the target domain while eliminating negative effects. However, domain variance is task-specific and still plays an important role affecting model performance. Compared to POS tagging and dependency parsing, in sentiment analysis, there exists more significant bias across domains, e.g., words such as “small” and “cheap” could be positive in one domain but negative in another. As a result, topic relevant domains express similar sentiment expressions. The investigation on the selected data indicates that our approach chooses more instances from the similar domains in sentiment analysis (e.g., BOOK ⇒DVD), while the selected instances in POS tagging and dependency parsing are more balanced across domains. This observation suggests the effectiveness of our approach in adapting different tasks with the most appropriate strategy. Yet, in addition, there still exist side effects on noise filtering and relevant instance selection, which can be observed from the slightly weaker results on ELECTRONICS domain in sentiment analysis as well as the fact that our approach is outperformed by training on all source data (in some domains) in parsing task. Such phenomenon implies that filtering irrelevant instances may lose intrinsic beneficial information for the target domain. 
Moreover, policy gradient method with partial data may sometimes converges to a local optima when learning on structured data because there exist many indirect relations among the learning instances. 4 Ablation Studies 4.1 Performance and Efficiency Analysis To better understand the behavior of our model with different measurements, we investigate their performance through a case study on the DVD domain in sentiment analysis. We draw accuracy curves of different models with respect to their training epochs, as shown in Figure 2(a). In general, our models present similar performance and are significantly better than the RANDOM one. Interestingly, their curves are similar to the ALL model but show a much stable fluctuation with epoch increasing. This observation demonstrates that there exist noise when directly using all source data, while our models are able to overcome such limitation. Another investigation is to study how much data are selected by different variants of our model. We display the number of instances selected by the four measurements in Figure 2(b), using the same do1964 (a) before training (b) ALL (c) JS-E (d) SDG (R´ENYI) Figure 3: t-SNE visualization of features (data representations) from the feature extractor in different scenarios for sentiment analysis on the DVD domain. Red cross, blue triangle, green star, and orange circle symbols represent samples from DVD, BOOKS, ELECTRONICS, and KITCHEN domain, respectively. main and task setting as that in Figure 2(a). Overall our models with different measurements share similar behavior in selecting source data in terms of selection numbers. They tend to select more data at the beginning stage, i.e., before 10 epochs, then reduce selected instances to a smaller set and maintain the performance of the predictor (comparing with curves in Figure 2(a)). Among all measurements, R´enyi divergence tend to select less data while achieving a better performance when matching its curve with the results reported in Table 4. In addition, we perform an early stop when the decrement of the training error falls below a preset threshold. Alternatively, to avoid over-selection, one can follow Klein et al. (2017) to predict the development of the performance curve so that TDS can be done more efficiently in fewer epochs. 4.2 Distribution Visualization To better demonstrate the effectiveness of our approach, we still use the sentiment analysis for the DVD domain with SDG (R´ENYI) for visualized comparison among the distributions of the features (data representations) in different scenarios. Following Tzeng et al. (2014), we plot in Figure 3 the t-SNE visualizations of the features learned from the feature extractor in four settings: features before training (initialized weights) (Figure 3(a)), directly trained on all source data (Figure 3(b)), trained with JS-E (Figure 3(c)), and trained with SDG (R´enyi) (Figure 3(d)). It is observed that, for original features, DVD and BOOKS are similar, 1965 while ELECTRONICS and KITCHEN are different from them as well as to each other. When trained with all source data, features are visualized with some changes in their distributions where instances from different domains are mixed and closer to the target domain. In the case where JS divergence is minimized for each instance, we can see a further mixture with closer representation matching. 
The figures indicate that, for both ALL and JS-E models, their domain adaptation ability is limited since the learned representations are not optimized for the target domain. On the contrary, when trained with our approach, the selected instances result in a highly similar distribution as that in the target domain (Figure 3(d)), with matched shape between the points in red and other colors. Such visualization confirms that our TDS framework not only selects the most appropriate instances (similar in the distribution shape), but also learns better representations (located at the similar positions of target domain instances) for them with respect to the target domain, which further illustrates the validity and effectiveness of joint selecting and learning from training instances for domain adaptation. 5 Related Work Many studies have been conducted recently for domain adaptation with neural networks (Long et al., 2015, 2017; Shu et al., 2018; Shankar et al., 2018). Their methodologies follow several mainstreams such as representation learning (Glorot et al., 2011; Chen et al., 2012; Baktashmotlagh et al., 2013; Song and Shi, 2018; Zhao et al., 2017), reweighing samples from the source domain (Borgwardt et al., 2006; Daum´e III, 2007; Song and Xia, 2013), and feature space transformation (Gopalan et al., 2011; Pan et al., 2011; Long et al., 2013), etc. Normally, the transferable knowledge across domains are derived from some certain data, while others contribute less and are costly to be learned from (Axelrod et al., 2011; Ruder and Plank, 2017). Thus, previous studies conduct domain adaptation through selecting relative and informative source data according to the nature of the target domain, via entropy-based methods (Song et al., 2012), Bayesian optimization (Ruder and Plank, 2017), etc. Particularly for NLP, TDS are proved to be effective in various tasks, such as in language modeling (Moore and Lewis, 2010), word segmentation (Song and Xia, 2012; Song et al., 2012), machine translation (Chen et al., 2016; van der Wees et al., 2017), and multilingual NER (Murthy et al., 2018). Recently, RL and representation learning provided new possibilities for TDS. For example, Fan et al. (2017) proposed to allocate appropriate training data at different training stages, which helps achieving comparative accuracy with less computational efforts compared with the model trained on the entire data. Feng et al. (2018) used sequential one-step actions for each single instance where every action is decided based on the previous one. As a result, their selection becomes a consuming process where the complexity is determined by the amount of the source data. For representation learning based approaches, there are studies such as Mansour et al. (2008); Gopalan et al. (2014); Pei et al. (2018) that adapted representations across domains, which is a widely adopted strategy for domain adaptation on neural models. Moreover, a similar work (Dong and Xing, 2018) adopted reinforced sampling strategy specifically for one-shot scenarios. Compared to aforementioned previous work, the proposed approach in this paper combines TDS and transferable representation learning in a unified RL framework, and is conducted in an effective way using data batches. 
6 Conclusion In this paper, we proposed a general TDS framework for domain adaptation via reinforcement learning, which matches the representations of the selected data from the source domain and the guidance set from the target domain and pass the similarity at different steps as rewards to guide a selection distribution generator. Through the generator, different instances from the source domain are selected to train a task-specific predictor. To this end, not only those data relevant to the target domain are selected, but also task- and domain-specific representations are learned for them. Experimental results from three NLP tasks, i.e., POS tagging, dependency parsing, and sentiment analysis, demonstrate that our models outperform various baselines across domains, especially (in most cases) the same predictor trained on all source data. Ablation studies on model convergence, selection numbers, as well as distribution visualizations further confirmed the validity and effectiveness of our approach. References Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain Adaptation via Pseudo in-domain 1966 Data Selection. In Proceedings of the conference on empirical methods in natural language processing, pages 355–362. Mahsa Baktashmotlagh, Mehrtash Tafazzoli Harandi, Brian C. Lovell, and Mathieu Salzmann. 2013. Unsupervised domain adaptation by domain invariant projection. In IEEE International Conference on Computer Vision, ICCV 2013, Sydney, Australia, December 1-8, 2013, pages 769–776. Jeremy Barnes, Roman Klinger, and Sabine Schulte im Walde. 2018. Projecting embeddings for domain adaption: Joint modeling of sentiment analysis in diverse domains. arXiv preprint arXiv:1806.04381. John Blitzer, Mark Dredze, and Fernando Pereira. 2007a. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, June 23-30, 2007, Prague, Czech Republic. John Blitzer, Mark Dredze, and Fernando Pereira. 2007b. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. In Proceedings of the 45th annual meeting of the association of computational linguistics, pages 440–447. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain Adaptation with Structural Correspondence Learning. In Proceedings of the 2006 conference on empirical methods in natural language processing, pages 120–128. Danushka Bollegala, David J. Weir, and John A. Carroll. 2011. Using multiple sources to construct a sentiment sensitive thesaurus for cross-domain sentiment classification. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 19-24 June, 2011, Portland, Oregon, USA, pages 132–141. Karsten M. Borgwardt, Arthur Gretton, Malte J. Rasch, Hans-Peter Kriegel, Bernhard Sch¨olkopf, and Alexander J. Smola. 2006. Integrating Structured Biological Data by Kernel Maximum Mean Discrepancy. In Proceedings 14th International Conference on Intelligent Systems for Molecular Biology 2006, Fortaleza, Brazil, August 6-10, 2006, pages 49–57. Eric Brochu, Vlad M Cora, and Nando De Freitas. 2010. A tutorial on bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599. Boxing Chen, Roland Kuhn, George Foster, Colin Cherry, and Fei Huang. 2016. 
Bilingual Methods for Adaptive Training Data Selection for Machine Translation. In Proc. of AMTA, pages 93–103. Minmin Chen, Zhixiang Eddie Xu, Kilian Q. Weinberger, and Fei Sha. 2012. Marginalized denoising autoencoders for domain adaptation. In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012. Gabriela Csurka, editor. 2017. Domain Adaptation in Computer Vision Applications. Advances in Computer Vision and Pattern Recognition. Springer. Hal Daum´e III. 2007. Frustratingly Easy Domain Adaptation. In ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, June 23-30, 2007, Prague, Czech Republic. Nanqing Dong and Eric P Xing. 2018. Domain adaption in one-shot learning. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 573–588. Springer. Yang Fan, Fei Tian, Tao Qin, Jiang Bian, and TieYan Liu. 2017. Learning what data to learn. arXiv preprint, abs/1702.08635. Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xiaoyan Zhu. 2018. Reinforcement learning for relation classification from noisy data. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, Louisiana, USA, February 2-7, 2018. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, pages 513–520. Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. 2011. Domain Adaptation for Object Recognition: An Unsupervised Approach. In IEEE International Conference on Computer Vision, ICCV 2011, Barcelona, Spain, November 6-13, 2011, pages 999–1006. Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. 2014. Unsupervised Adaptation Across Domain Shifts by Generating Intermediate Data Representations. IEEE transactions on pattern analysis and machine intelligence, 36(11):2288–2302. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar. Diederik P Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. TACL, 4:313– 327. 1967 Aaron Klein, Stefan Falkner, Jost Tobias Springenberg, and Frank Hutter. 2017. Learning Curve Prediction with Bayesian Neural Networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017. Jianhua Lin. 1991. Divergence Measures Based on the Shannon Entropy. IEEE Trans. Information Theory, 37(1):145–151. Miaofeng Liu, Jialong Han, Haisong Zhang, and Yan Song. 2018. Domain Adaptation for Disease Phrase Matching with Adversarial Networks. In Proceedings of the BioNLP 2018 workshop, pages 137–141, Melbourne, Australia. Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I Jordan. 2015. Learning Transferable Features with Deep Adaptation Networks. arXiv preprint arXiv:1502.02791. Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, and Philip S Yu. 2013. Transfer feature learning with joint distribution adaptation. In Proceedings of the IEEE international conference on computer vision, pages 2200–2207. 
Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I. Jordan. 2017. Deep Transfer Learning with Joint Adaptation Networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 2208–2217, Sydney, Australia. Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. 2008. Domain adaptation with multiple sources. In Advances in Neural Information Processing Systems 21, Proceedings of the TwentySecond Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 8-11, 2008, pages 1041–1048. Robert C Moore and William Lewis. 2010. Intelligent Selection of Language Model Training Data. In Proceedings of the ACL 2010 conference short papers, pages 220–224. Association for Computational Linguistics. Rudra Murthy, Anoop Kunchukuttan, and Pushpak Bhattacharyya. 2018. Judicious Selection of Training Data in Assisting Language for Multilingual Neural NER. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, volume 2, pages 401–406. Sinno Jialin Pan, Ivor W. Tsang, James T. Kwok, and Qiang Yang. 2011. Domain adaptation via transfer component analysis. IEEE Trans. Neural Networks, 22(2):199–210. Zhongyi Pei, Zhangjie Cao, Mingsheng Long, and Jianmin Wang. 2018. Multi-adversarial Domain Adaptation. In Thirty-Second AAAI Conference on Artificial Intelligence. Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. In Notes of the first workshop on syntactic analysis of noncanonical language (sancl), volume 59. Citeseer. Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. arXiv preprint arXiv:1604.05529. Barbara Plank and Gertjan Van Noord. 2011. Effective measures of domain similarity for parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 1566–1576. Alfr´ed R´enyi. 1961. On measures of entropy and information. Technical report, HUNGARIAN ACADEMY OF SCIENCES Budapest Hungary. Michael T Rosenstein, Zvika Marx, Leslie Pack Kaelbling, and Thomas G Dietterich. 2005. To transfer or not to transfer. In NIPS 2005 workshop on transfer learning, volume 898, pages 1–4. Sebastian Ruder and Barbara Plank. 2017. Learning to select data for transfer learning with Bayesian Optimization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 372–382, Copenhagen, Denmark. Sebastian Ruder and Barbara Plank. 2018. Strong Baselines for Neural Semi-Supervised Learning under Domain Shift. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 1044–1054, Melbourne, Australia. Shiv Shankar, Vihari Piratla, Soumen Chakrabarti, Siddhartha Chaudhuri, Preethi Jyothi, and Sunita Sarawagi. 2018. Generalizing across domains via cross-gradient training. arXiv preprint arXiv:1804.10745. Rui Shu, Hung H Bui, Hirokazu Narui, and Stefano Ermon. 2018. A dirt-t approach to unsupervised domain adaptation. arXiv preprint arXiv:1802.08735. Anders Søgaard. 2011. Data point selection for crosslanguage adaptation of dependency parsers. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 682–686, Portland, Oregon, USA. Yan Song, Prescott Klassen, Fei Xia, and Chunyu Kit. 2012. Entropy-based Training Data Selection for Domain Adaptation. 
In Proceedings of the 24th International Conference on Computational Linguistics, pages 1191–1200, Mumbai, India. Yan Song and Shuming Shi. 2018. Complementary Learning of Word Embeddings. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 4368– 4374. 1968 Yan Song, Shuming Shi, Jing Li, and Haisong Zhang. 2018. Directional Skip-Gram: Explicitly Distinguishing Left and Right Context for Word Embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 175–180, New Orleans, Louisiana. Yan Song and Fei Xia. 2012. Using a Goodness Measurement for Domain Adaptation: A Case Study on Chinese Word Segmentation. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 3853– 3860, Istanbul, Turkey. Yan Song and Fei Xia. 2013. A Common Case of Jekyll and Hyde: The Synergistic Effect of Using Divided Source Training Data for Feature Augmentation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 623–631, Nagoya, Japan. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15(1):1929–1958. Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12, [NIPS Conference, Denver, Colorado, USA, November 29 - December 4, 1999], pages 1057–1063. Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. 2014. Deep domain confusion: Maximizing for domain invariance. arXiv preprint, abs/1412.3474. Vincent Van Asch and Walter Daelemans. 2010. Using domain similarity for performance estimation. In Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing, pages 31– 36. Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2017. Dynamic Data Selection for Neural Machine Translation. arXiv preprint arXiv:1708.00712. Fangzhao Wu and Yongfeng Huang. 2016. Sentiment Domain Adaptation with Multiple Sources. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 301–310. Han Zhao, Shanghang Zhang, Guanhang Wu, Jo˜ao P. Costeira, Jos´e M. F. Moura, and Geoffrey J. Gordon. 2017. Multiple Source Domain Adaptation with Adversarial Training of Neural Networks. arXiv preprint, abs/1705.09684. Yftah Ziser and Roi Reichart. 2018. Pivot based Language Modeling for Improved Neural Domain Adaptation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 1241–1251.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 194–203 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 194 An Effective Approach to Unsupervised Machine Translation Mikel Artetxe, Gorka Labaka, Eneko Agirre IXA NLP Group University of the Basque Country (UPV/EHU) {mikel.artetxe, gorka.labaka, e.agirre}@ehu.eus Abstract While machine translation has traditionally relied on large amounts of parallel corpora, a recent research line has managed to train both Neural Machine Translation (NMT) and Statistical Machine Translation (SMT) systems using monolingual corpora only. In this paper, we identify and address several deficiencies of existing unsupervised SMT approaches by exploiting subword information, developing a theoretically well founded unsupervised tuning method, and incorporating a joint refinement procedure. Moreover, we use our improved SMT system to initialize a dual NMT model, which is further fine-tuned through onthe-fly back-translation. Together, we obtain large improvements over the previous stateof-the-art in unsupervised machine translation. For instance, we get 22.5 BLEU points in English-to-German WMT 2014, 5.5 points more than the previous best unsupervised system, and 0.5 points more than the (supervised) shared task winner back in 2014. 1 Introduction The recent advent of neural sequence-to-sequence modeling has resulted in significant progress in the field of machine translation, with large improvements in standard benchmarks (Vaswani et al., 2017; Edunov et al., 2018) and the first solid claims of human parity in certain settings (Hassan et al., 2018). Unfortunately, these systems rely on large amounts of parallel corpora, which are only available for a few combinations of major languages like English, German and French. Aiming to remove this dependency on parallel data, a recent research line has managed to train unsupervised machine translation systems using monolingual corpora only. The first such systems were based on Neural Machine Translation (NMT), and combined denoising autoencoding and back-translation to train a dual model initialized with cross-lingual embeddings (Artetxe et al., 2018c; Lample et al., 2018a). Nevertheless, these early systems were later superseded by Statistical Machine Translation (SMT) based approaches, which induced an initial phrase-table through cross-lingual embedding mappings, combined it with an n-gram language model, and further improved the system through iterative backtranslation (Lample et al., 2018b; Artetxe et al., 2018b). In this paper, we develop a more principled approach to unsupervised SMT, addressing several deficiencies of previous systems by incorporating subword information, applying a theoretically well founded unsupervised tuning method, and developing a joint refinement procedure. In addition to that, we use our improved SMT approach to initialize an unsupervised NMT system, which is further improved through on-the-fly back-translation. Our experiments on WMT 2014/2016 FrenchEnglish and German-English show the effectiveness of our approach, as our proposed system outperforms the previous state-of-the-art in unsupervised machine translation by 5-7 BLEU points in all these datasets and translation directions. 
Our system also outperforms the supervised WMT 2014 shared task winner in English-to-German, and is around 2 BLEU points behind it in the rest of translation directions, suggesting that unsupervised machine translation can be a usable alternative in practical settings. The remaining of this paper is organized as follows. Section 2 first discusses the related work in the topic. Section 3 then describes our principled unsupervised SMT method, while Section 4 discusses our hybridization method with NMT. We then present the experiments done and the results obtained in Section 5, and Section 6 concludes the paper. 195 2 Related work Early attempts to build machine translation systems with monolingual corpora go back to statistical decipherment (Ravi and Knight, 2011; Dou and Knight, 2012). These methods see the source language as ciphertext produced by a noisy channel model that first generates the original English text and then probabilistically replaces the words in it. The English generative process is modeled using an n-gram language model, and the channel model parameters are estimated using either expectation maximization or Bayesian inference. This basic approach was later improved by incorporating syntactic knowledge (Dou and Knight, 2013) and word embeddings (Dou et al., 2015). Nevertheless, these methods were only shown to work in limited settings, being most often evaluated in word-level translation. More recently, the task got a renewed interest after the concurrent work of Artetxe et al. (2018c) and Lample et al. (2018a) on unsupervised NMT which, for the first time, obtained promising results in standard machine translation benchmarks using monolingual corpora only. Both methods build upon the recent work on unsupervised cross-lingual embedding mappings, which independently train word embeddings in two languages and learn a linear transformation to map them to a shared space through self-learning (Artetxe et al., 2017, 2018a) or adversarial training (Conneau et al., 2018). The resulting crosslingual embeddings are used to initialize a shared encoder for both languages, and the entire system is trained using a combination of denoising autoencoding, back-translation and, in the case of Lample et al. (2018a), adversarial training. This method was further improved by Yang et al. (2018), who use two language-specific encoders sharing only a subset of their parameters, and incorporate a local and a global generative adversarial network. Concurrent to our work, Lample and Conneau (2019) report strong results initializing an unsupervised NMT system with a cross-lingual language model. Following the initial work on unsupervised NMT, it was argued that the modular architecture of phrase-based SMT was more suitable for this problem, and Lample et al. (2018b) and Artetxe et al. (2018b) adapted the same principles discussed above to train an unsupervised SMT model, obtaining large improvements over the original unsupervised NMT systems. More concretely, both approaches learn cross-lingual n-gram embeddings from monolingual corpora based on the mapping method discussed earlier, and use them to induce an initial phrase-table that is combined with an n-gram language model and a distortion model. This initial system is then refined through iterative back-translation (Sennrich et al., 2016) which, in the case of Artetxe et al. (2018b), is preceded by an unsupervised tuning step. 
Our work identifies some deficiencies in these previous systems, and proposes a more principled approach to unsupervised SMT that incorporates subword information, uses a theoretically better founded unsupervised tuning method, and applies a joint refinement procedure, outperforming these previous systems by a substantial margin. Very recently, some authors have tried to combine both SMT and NMT to build hybrid unsupervised machine translation systems. This idea was already explored by Lample et al. (2018b), who aided the training of their unsupervised NMT system by combining standard back-translation with synthetic parallel data generated by unsupervised SMT. Marie and Fujita (2018) go further and use synthetic parallel data from unsupervised SMT to train a conventional NMT system from scratch. The resulting NMT model is then used to augment the synthetic parallel corpus through backtranslation, and a new NMT model is trained on top of it from scratch, repeating the process iteratively. Ren et al. (2019) follow a similar approach, but use SMT as posterior regularization at each iteration. As shown later in our experiments, our proposed NMT hybridization obtains substantially larger absolute gains than all these previous approaches, even if our initial SMT system is stronger and thus more challenging to improve upon. 3 Principled unsupervised SMT Phrase-based SMT is formulated as a log-linear combination of several statistical models: a translation model, a language model, a reordering model and a word/phrase penalty. As such, building an unsupervised SMT system requires learning these different components from monolingual corpora. As it turns out, this is straightforward for most of them: the language model is learned from monolingual corpora by definition; the word and phrase penalties are parameterless; and one 196 can drop the standard lexical reordering model at a small cost and do with the distortion model alone, which is also parameterless. This way, the main challenge left is learning the translation model, that is, building the phrase-table. Our proposed method starts by building an initial phrase-table through cross-lingual embedding mappings (Section 3.1). This initial phrase-table is then extended by incorporating subword information, addressing one of the main limitations of previous unsupervised SMT systems (Section 3.2). Having done that, we adjust the weights of the underlying log-linear model through a novel unsupervised tuning procedure (Section 3.3). Finally, we further improve the system by jointly refining two models in opposite directions (Section 3.4). 3.1 Initial phrase-table So as to build our initial phrase-table, we follow Artetxe et al. (2018b) and learn n-gram embeddings for each language independently, map them to a shared space through self-learning, and use the resulting cross-lingual embeddings to extract and score phrase pairs. More concretely, we train our n-gram embeddings using phrase2vec1, a simple extension of skip-gram that applies the standard negative sampling loss of Mikolov et al. (2013) to bigramcontext and trigram-context pairs in addition to the usual word-context pairs.2 Having done that, we map the embeddings to a cross-lingual space using VecMap3 with identical initialization (Artetxe et al., 2018a), which builds an initial solution by aligning identical words and iteratively improves it through self-learning. 
Finally, we extract translation candidates by taking the 100 nearestneighbors of each source phrase, and score them by applying the softmax function over their cosine similarities: φ( ¯f|¯e) = exp cos(¯e, ¯f)/τ  P ¯f′ exp cos(¯e, ¯f′)/τ  where the temperature τ is estimated using maximum likelihood estimation over a dictionary induced in the reverse direction. In addition to the phrase translation probabilities in both directions, the forward and reverse lexical weightings 1https://github.com/artetxem/ phrase2vec 2So as to keep the model size within a reasonable limit, we restrict the vocabulary to the most frequent 200,000 unigrams, 400,000 bigrams and 400,000 trigrams. 3https://github.com/artetxem/vecmap are also estimated by aligning each word in the target phrase with the one in the source phrase most likely generating it, and taking the product of their respective translation probabilities. The reader is referred to Artetxe et al. (2018b) for more details. 3.2 Adding subword information An inherent limitation of existing unsupervised SMT systems is that words are taken as atomic units, making it impossible to exploit characterlevel information. This is reflected in the known difficulty of these models to translate named entities, as it is very challenging to discriminate among related proper nouns based on distributional information alone, yielding to translation errors like “Sunday Telegraph” →“The Times of London” (Artetxe et al., 2018b). So as to overcome this issue, we propose to incorporate subword information once the initial alignment is done at the word/phrase level. For that purpose, we add two additional weights to the initial phrase-table that are analogous to the lexical weightings, but use a character-level similarity function instead of word translation probabilities: score( ¯f|¯e) = Y i max  ϵ, max j sim( ¯fi, ¯ej)  where ϵ = 0.3 guarantees a minimum similarity score, as we want to favor translation candidates that are similar at the character level without excessively penalizing those that are not. In our case, we use a simple similarity function that normalizes the Levenshtein distance lev(·) (Levenshtein, 1966) by the length of the words len(·): sim(f, e) = 1 − lev(f, e) max(len(f), len(e)) We leave the exploration of more elaborated similarity functions and, in particular, learnable metrics (McCallum et al., 2005), for future work. 3.3 Unsupervised tuning Having trained the underlying statistical models independently, SMT tuning aims to adjust the weights of their resulting log-linear combination to optimize some evaluation metric like BLEU in a parallel validation corpus, which is typically done through Minimum Error Rate Training or MERT (Och, 2003). Needless to say, this cannot be done in strictly unsupervised settings, but we argue that 197 it would still be desirable to optimize some unsupervised criterion that is expected to correlate well with test performance. Unfortunately, neither of the existing unsupervised SMT systems do so: Artetxe et al. (2018b) use a heuristic that builds two initial models in opposite directions, uses one of them to generates a synthetic parallel corpus through back-translation (Sennrich et al., 2016), and applies MERT to tune the model in the reverse direction, iterating until convergence, whereas Lample et al. (2018b) do not perform any tuning at all. 
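Before turning to tuning, the two scoring components just described can be made concrete with a short sketch: the softmax over cosine similarities that yields the phrase translation probabilities of Section 3.1, and the Levenshtein-based character-level weighting of Section 3.2. The temperature τ is treated here as a given hyperparameter (the paper estimates it by maximum likelihood), and the edit distance is implemented directly to keep the sketch self-contained; none of this is the authors' actual code.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def phrase_translation_probs(src_vec, cand_vecs, temperature):
    """phi(f | e): softmax over cosine similarities between a source phrase
    embedding and its candidate translations (Section 3.1)."""
    sims = np.array([cosine(src_vec, c) for c in cand_vecs]) / temperature
    e = np.exp(sims - sims.max())
    return e / e.sum()

def levenshtein(a, b):
    """Plain dynamic-programming edit distance."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return int(d[len(a), len(b)])

def char_similarity(f, e):
    """sim(f, e) = 1 - lev(f, e) / max(len(f), len(e))."""
    return 1.0 - levenshtein(f, e) / max(len(f), len(e))

def subword_score(f_words, e_words, eps=0.3):
    """score(f | e) = prod_i max(eps, max_j sim(f_i, e_j))  (Section 3.2)."""
    return float(np.prod([max(eps, max(char_similarity(fi, ej) for ej in e_words))
                          for fi in f_words]))

# Candidates that are close at the character level keep a high weight, while
# unrelated ones are floored at eps = 0.3 rather than being zeroed out.
print(char_similarity("merkel", "merkels"))                      # ~0.86
print(subword_score(["angela", "merkel"], ["angela", "merkels"]))
rng = np.random.default_rng(2)
probs = phrase_translation_probs(rng.normal(size=32), rng.normal(size=(100, 32)), temperature=0.1)
```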
In what follows, we propose a more principled approach to tuning that defines an unsupervised criterion and an optimization procedure that is guaranteed to converge to a local optimum of it. Inspired by the previous work on CycleGANs (Zhu et al., 2017) and dual learning (He et al., 2016), our method takes two initial models in opposite directions, and defines an unsupervised optimization objective that combines a cyclic consistency loss and a language model loss over the two monolingual corpora E and F: L = Lcycle(E) + Lcycle(F) + Llm(E) + Llm(F) The cyclic consistency loss captures the intuition that the translation of a translation should be close to the original text. So as to quantify this, we take a monolingual corpus in the source language, translate it to the target language and back to the source language, and compute its BLEU score taking the original text as reference: Lcycle(E) = 1 −BLEU(TF→E(TE→F (E)), E) At the same time, the language model loss captures the intuition that machine translation should produce fluent text in the target language. For that purpose, we estimate the per-word entropy in the target language corpus using an n-gram language model, and penalize higher per-word entropies in machine translated text as follows:4 Llm(E) = LP · max(0, H(F) −H(TE→F (E)))2 4We initially tried to directly minimize the entropy of the generated text, but this worked poorly in our preliminary experiments on English-Spanish (note that we used this language pair exclusively for development to be faithful to our unsupervised scenario at test time). More concretely, the behavior of the optimization algorithm was very unstable, as it tended to excessively focus on either the cyclic consistency loss or the language model loss at the cost of the other, and we found it very difficult to find the right balance between the two factors. where the length penalty LP = LP(E) · LP(F) penalizes excessively long translations:5 LP(E) = max  1, len(TF→E(TE→F (E))) len(E)  So as to minimize the combined loss function, we adapt MERT to jointly optimize the parameters of the two models. In its basic form, MERT approximates the search space for each source sentence through an n-best list, and performs a form of coordinate descent by computing the optimal value for each parameter through an efficient line search method and greedily taking the step that leads to the largest gain. The process is repeated iteratively until convergence, augmenting the n-best list with the updated parameters at each iteration so as to obtain a better approximation of the full search space. Given that our optimization objective combines two translation systems TF→E(TE→F (E)), this would require generating an n-best list for TE→F (E) first and, for each entry on it, generating a new n-best list with TF→E, yielding a combined n-best list with N2 entries. So as to make it more efficient, we propose an alternating optimization approach where we fix the parameters of one model and optimize the other with standard MERT. Thanks to this, we do not need to expand the search space of the fixed model, so we can do with an n-best list of N entries alone. Having done that, we fix the parameters of the opposite model and optimize the other, iterating until convergence. 3.4 Joint refinement Constrained by the lack of parallel corpora, the procedure described so far makes important simplifications that could compromise its potential performance: its phrase-table is somewhat unnatural (e.g. 
the translation probabilities are estimated from cross-lingual embeddings rather than actual frequency counts) and it lacks a lexical reordering model altogether. So as to overcome this issue, existing unsupervised SMT methods generate a synthetic parallel corpus through back-translation and use it to train a standard SMT system from scratch, iterating until convergence. 5Without this penalization, the system tended to produce unnecessary tokens (e.g. quotes) that looked natural in their context, which served to minimize the per-word perplexity of the output. Minimizing the overall perplexity instead of the per-word perplexity did not solve the problem, as the opposite phenomenon arose (i.e. the system tended to produce excessively short translations). 198 An obvious drawback of this approach is that the back-translated side will contain ungrammatical n-grams and other artifacts that will end up in the induced phrase-table. One could argue that this should be innocuous as long as the ungrammatical n-grams are in the source side, as they should never occur in real text and their corresponding entries in the phrase-table should therefore not be used. However, ungrammatical source phrases do ultimately affect the estimation of the backward translation probabilities, including those of grammatical phrases.6 We argue that, ultimately, the backward probability estimations can only be meaningful when all source phrases are grammatical (so the probabilities of all plausible translations sum to one) and, similarly, the forward probability estimations can only be meaningful when all target phrases are grammatical. Following the above observation, we propose an alternative approach that jointly refines both translation directions. More concretely, we use the initial systems to build two synthetic corpora in opposite directions.7 Having done that, we independently extract phrase pairs from each synthetic corpus, and build a phrase-table by taking their intersection. The forward probabilities are estimated in the parallel corpus with the synthetic source side, while the backward probabilities are estimated in the one with the synthetic target side. This does not only guarantee that the probability estimates are meaningful as discussed previously, but it also discards the ungrammatical phrases altogether, as both the source and the target n-grams must have occurred in the original monolingual texts to be present in the resulting phrase-table. This phrase-table is then combined with a lexical reordering model learned on the synthetic parallel corpus in the reverse direction, and we apply the unsupervised tuning method described in Section 3.3 to adjust the weights of the resulting system. We repeat this process for a total of 3 iterations.8 6For instance, let’s say that the target phrase “dos gatos” has been aligned 10 times with “two cats” and 90 times with “two cat”. While the ungrammatical phrase-table entry two cat- dos gatos should never be picked, the backward probability estimation of two cats - dos gatos is still affected by it (it would be 0.1 instead of 1.0 in this example). 7For efficiency purposes, we restrict the size of each synthetic parallel corpus to 10 million sentence pairs. 8For the last iteration, we do not perform any tuning and use default Moses weights instead, which we found to be more robust during development. Note, however, that using unsupervised tuning during the previous steps was still strongly beneficial. 
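As a recap of the tuning criterion of Section 3.3, the combined loss can be sketched as below. The two translation systems, the BLEU score (assumed here to be in [0, 1]) and the per-word entropies under the two n-gram language models are passed in as callables, since in the real system they would wrap the trained SMT models and the language models; the stand-ins in the usage lines are toys. The sign of the language-model term follows the prose description ("penalize higher per-word entropies in machine translated text").

```python
from typing import Callable, List

def length_penalty(original: List[str], round_trip: List[str]) -> float:
    """LP(E) = max(1, len(round-tripped E) / len(E)), counting whitespace tokens."""
    n_orig = sum(len(s.split()) for s in original)
    n_rt = sum(len(s.split()) for s in round_trip)
    return max(1.0, n_rt / max(n_orig, 1))

def unsupervised_tuning_loss(
    corpus_e: List[str], corpus_f: List[str],
    translate_ef: Callable[[List[str]], List[str]],
    translate_fe: Callable[[List[str]], List[str]],
    bleu: Callable[[List[str], List[str]], float],   # bleu(hypotheses, references) in [0, 1]
    entropy_e: Callable[[List[str]], float],         # per-word entropy under the E language model
    entropy_f: Callable[[List[str]], float],         # per-word entropy under the F language model
) -> float:
    """L = Lcycle(E) + Lcycle(F) + Llm(E) + Llm(F), as in Section 3.3."""
    # Cycle-consistency terms: round-trip each monolingual corpus and compare
    # the result against the original text with BLEU.
    e_to_f = translate_ef(corpus_e)
    e_round = translate_fe(e_to_f)
    f_to_e = translate_fe(corpus_f)
    f_round = translate_ef(f_to_e)
    l_cycle = (1.0 - bleu(e_round, corpus_e)) + (1.0 - bleu(f_round, corpus_f))

    # Language-model terms, scaled by the length penalty LP = LP(E) * LP(F):
    # translated text should not be less fluent (higher per-word entropy) than
    # real text in the language it is written in.
    lp = length_penalty(corpus_e, e_round) * length_penalty(corpus_f, f_round)
    l_lm = (lp * max(0.0, entropy_f(e_to_f) - entropy_f(corpus_f)) ** 2
            + lp * max(0.0, entropy_e(f_to_e) - entropy_e(corpus_e)) ** 2)
    return l_cycle + l_lm

# Toy stand-ins to show the call signature only; real callables would wrap the
# two SMT systems, the n-gram language models and a corpus-level BLEU scorer.
identity = lambda sents: list(sents)
loss = unsupervised_tuning_loss(
    ["a b c"], ["x y z"], identity, identity,
    bleu=lambda hyp, ref: float(hyp == ref),
    entropy_e=lambda sents: 1.0, entropy_f=lambda sents: 1.0)
assert loss == 0.0
```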
4 NMT hybridization While the rigid and modular design of SMT provides a very suitable framework for unsupervised machine translation, NMT has shown to be a fairly superior paradigm in supervised settings, outperforming SMT by a large margin in standard benchmarks. As such, the choice of SMT over NMT also imposes a hard ceiling on the potential performance of these approaches, as unsupervised SMT systems inherit the very same limitations of their supervised counterparts (e.g. the locality and sparsity problems). For that reason, we argue that SMT provides a more appropriate architecture to find an initial alignment between the languages, but NMT is ultimately a better architecture to model the translation process. Following this observation, we propose a hybrid approach that uses unsupervised SMT to warm up a dual NMT model trained through iterative backtranslation. More concretely, we first train two SMT systems in opposite directions as described in Section 3, and use them to assist the training of another two NMT systems in opposite directions. These NMT systems are trained following an iterative process where, at each iteration, we alternately update the model in each direction by performing a single pass over a synthetic parallel corpus built through back-translation (Sennrich et al., 2016).9 In the first iteration, the synthetic parallel corpus is entirely generated by the SMT system in the opposite direction but, as training progresses and the NMT models get better, we progressively switch to a synthetic parallel corpus generated by the reverse NMT model. More concretely, iteration t uses Nsmt = N · max(0, 1 −t/a) synthetic parallel sentences from the reverse SMT system, where the parameter a controls the number of transition iterations from SMT to NMT back-translation. The remaining N −Nsmt sentences are generated by the reverse NMT model. Inspired by Edunov et al. (2018), we use greedy decoding for half of them, which produces more fluent and predictable translations, and random sampling for the other half, which produces more varied translations. In our experiments, we use N = 1, 000, 000 and a = 30, and perform a total of 60 such iterations. At test time, we use beam search decoding with an ensemble of all check9Note that we do not train a new model from scratch each time, but continue training the model from the previous iteration. 199 WMT-14 WMT-16 fr-en en-fr de-en en-de de-en en-de NMT Artetxe et al. (2018c) 15.6 15.1 10.2 6.6 Lample et al. (2018a) 14.3 15.1 13.3 9.6 Yang et al. (2018) 15.6 17.0 14.6 10.9 Lample et al. (2018b) 24.2 25.1 21.0 17.2 SMT Artetxe et al. (2018b) 25.9 26.2 17.4 14.1 23.1 18.2 Lample et al. (2018b) 27.2 28.1 22.9 17.9 Marie and Fujita (2018)∗ 20.2 15.5 Proposed system 28.4 30.1 20.1 15.8 25.4 19.7 detok. SacreBLEU∗ 27.9 27.8 19.7 14.7 24.8 19.4 SMT + NMT Lample et al. (2018b) 27.7 27.6 25.2 20.2 Marie and Fujita (2018)∗ 26.7 20.0 Ren et al. (2019) 28.9 29.5 20.4 17.0 26.3 21.7 Proposed system 33.5 36.2 27.0 22.5 34.4 26.9 detok. SacreBLEU∗ 33.2 33.6 26.4 21.2 33.8 26.4 Table 1: Results of the proposed method in comparison to previous work (BLEU). Overall best results are in bold, the best ones in each group are underlined. ∗Detokenized BLEU equivalent to the official mteval-v13a.pl script. The rest use tokenized BLEU with multi-bleu.perl (or similar). points from every 10 iterations. 
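The transition from SMT to NMT back-translation described above is a simple linear schedule, sketched below with the values N = 1,000,000 and a = 30 given in the text; the even split of the NMT share between greedy decoding and random sampling also follows the text, while the function name is ours.

```python
def backtranslation_mix(t: int, n_total: int = 1_000_000, a: int = 30):
    """At iteration t, N_smt = N * max(0, 1 - t/a) synthetic pairs come from the
    reverse SMT system; the remaining pairs come from the reverse NMT model,
    half via greedy decoding and half via random sampling (Section 4)."""
    n_smt = int(n_total * max(0.0, 1.0 - t / a))
    n_nmt = n_total - n_smt
    return {"smt": n_smt, "nmt_greedy": n_nmt // 2, "nmt_sampled": n_nmt - n_nmt // 2}

# At iteration 0 all pairs come from SMT back-translation, at iteration 15
# roughly half, and from iteration 30 onwards all pairs come from NMT.
for t in (0, 15, 30, 60):
    print(t, backtranslation_mix(t))
```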
5 Experiments and results In order to make our experiments comparable to previous work, we use the French-English and German-English datasets from the WMT 2014 shared task. More concretely, our training data consists of the concatenation of all News Crawl monolingual corpora from 2007 to 2013, which make a total of 749 million tokens in French, 1,606 millions in German, and 2,109 millions in English, from which we take a random subset of 2,000 sentences for tuning (Section 3.3). Preprocessing is done using standard Moses tools, and involves punctuation normalization, tokenization with aggressive hyphen splitting, and truecasing. Our SMT implementation is based on Moses10, and we use the KenLM (Heafield et al., 2013) tool included in it to estimate our 5-gram language model with modified Kneser-Ney smoothing. Our unsupervised tuning implementation is based on Z-MERT (Zaidan, 2009), and we use FastAlign (Dyer et al., 2013) for word alignment within the joint refinement procedure. Finally, we use the big transformer implementation from fairseq11 for our NMT system, training with a total batch size of 20,000 tokens across 8 GPUs with the exact same hyperparameters as Ott et al. (2018). We use newstest2014 as our test set for 10http://www.statmt.org/moses/ 11https://github.com/pytorch/fairseq French-English, and both newstest2014 and newstest2016 (from WMT 201612) for GermanEnglish. Following common practice, we report tokenized BLEU scores as computed by the multi-bleu.perl script included in Moses. In addition to that, we also report detokenized BLEU scores as computed by SacreBLEU13 (Post, 2018), which is equivalent to the official mteval-v13a.pl script. We next present the results of our proposed system in comparison to previous work in Section 5.1. Section 5.2 then compares the obtained results to those of different supervised systems. Finally, Section 5.3 presents some translation examples from our system. 5.1 Main results Table 1 reports the results of the proposed system in comparison to previous work. As it can be seen, our full system obtains the best published results in all cases, outperforming the previous stateof-the-art by 5-7 BLEU points in all datasets and translation directions. A substantial part of this improvement comes from our more principled unsupervised SMT ap12Note that it is only the test set that is from WMT 2016. All the training data comes from WMT 2014 News Crawl, so it is likely that our results could be further improved by using the more extensive monolingual corpora from WMT 2016. 13SacreBLEU signature: BLEU+case.mixed+lang.LANG +numrefs.1+smooth.exp+test.TEST+tok.13a+version.1.2.1 1, with LANG ∈{fr-en, en-fr, de-en, en-de} and TEST ∈ {wmt14/full, wmt16} 200 WMT-14 WMT-16 fr-en en-fr de-en en-de Lample et al. (2018b) Initial SMT 27.2 28.1 22.9 17.9 + NMT hybrid 27.7 (+0.5) 27.6 (-0.5) 25.2 (+2.3) 20.2 (+2.3) Marie and Fujita (2018) Initial SMT 20.2 15.5 + NMT hybrid 26.7 (+6.5) 20.0 (+4.5) Proposed system Initial SMT 28.4 30.1 25.4 19.7 + NMT hybrid 33.5 (+5.1) 36.2 (+6.1) 34.4 (+9.0) 26.9 (+7.2) Table 2: NMT hybridization results for different unsupervised machine translation systems (BLEU). WMT-14 fr-en en-fr de-en en-de Unsupervised Proposed system 33.5 36.2 27.0 22.5 detok. SacreBLEU∗ 33.2 33.6 26.4 21.2 Supervised WMT best∗ 35.0 35.8 29.0 20.6† Vaswani et al. (2017) 41.0 28.4 Edunov et al. (2018) 45.6 35.0 Table 3: Results of the proposed method in comparison to different supervised systems (BLEU). 
∗Detokenized BLEU equivalent to the official mteval-v13a.pl script. The rest use tokenized BLEU with multi-bleu.perl (or similar). †Results in the original test set from WMT 2014, which slightly differs from the full test set used in all subsequent work. Our proposed system obtains 22.4 BLEU points (21.1 detokenized) in that same subset. proach, which outperforms all previous SMTbased systems by around 2 BLEU points. Nevertheless, it is the NMT hybridization that brings the largest gains, improving the results of this initial SMT systems by 5-9 BLEU points. As shown in Table 2, our absolute gains are considerably larger than those of previous hybridization methods, even if our initial SMT system is substantially better and thus more difficult to improve upon. This way, our initial SMT system is about 4-5 BLEU points above that of Marie and Fujita (2018), yet our absolute gain on top of it is around 2.5 BLEU points higher. When compared to Lample et al. (2018b), we obtain an absolute gain of 56 BLEU points in both French-English directions while they do not get any clear improvement, and we obtain an improvement of 7-9 BLEU points in both German-English directions, in contrast with the 2.3 BLEU points they obtain. More generally, it is interesting that pure SMT systems perform better than pure NMT systems, yet the best results are obtained by initializing an NMT system with an SMT system. This suggests that the rigid and modular architecture of SMT might be more suitable to find an initial alignment between the languages, but the final system should be ultimately based on NMT for optimal results. 5.2 Comparison with supervised systems So as to put our results into perspective, Table 3 reports the results of different supervised systems in the same WMT 2014 test set. More concretely, we include the best results from the shared task itself, which reflect the state-of-the-art in machine translation back in 2014; those of Vaswani et al. (2017), who introduced the now predominant transformer architecture; and those of Edunov et al. (2018), who apply back-translation at a large scale and, to the best of our knowledge, hold the current best results in the test set. As it can be seen, our unsupervised system outperforms the WMT 2014 shared task winner in English-to-German, and is around 2 BLEU points behind it in the other translation directions. This shows that unsupervised machine translation is already competitive with the state-of-the-art in supervised machine translation in 2014. While the field of machine translation has undergone great progress in the last 5 years, and the gap between our unsupervised system and the current state-ofthe-art in supervised machine translation is still large as reflected by the other results, this suggests that unsupervised machine translation can be a usable alternative in practical settings. 201 Source Reference Artetxe et al. (2018b) Proposed system D’autres révélations ont fait état de documents divulgués par Snowden selon lesquels la NSA avait intercepté des données et des communications émanant du téléphone portable de la chancelière allemande Angela Merkel et de ceux de 34 autres chefs d’État. Other revelations cited documents leaked by Snowden that the NSA monitored German Chancellor Angela Merkel’s cellphone and those of up to 34 other world leaders. 
Other disclosures have reported documents disclosed by Snowden suggested the NSA had intercepted communications and data from the mobile phone of German Chancellor Angela Merkel and those of 32 other heads of state. Other revelations have pointed to documents disclosed by Snowden that the NSA had intercepted data and communications emanating from German Chancellor Angela Merkel’s mobile phone and those of 34 other heads of state. La NHTSA n’a pas pu examiner la lettre d’information aux propriétaires en raison de l’arrêt de 16 jours des activités gouvernementales, ce qui a ralenti la croissance des ventes de véhicules en octobre. NHTSA could not review the owner notification letter due to the 16-day government shutdown, which tempered auto sales growth in October. The NHTSA could not consider the letter of information to owners because of halting 16-day government activities, which slowed the growth in vehicle sales in October. NHTSA said it could not examine the letter of information to owners because of the 16-day halt in government operations, which slowed vehicle sales growth in October. Le M23 est né d’une mutinerie, en avril 2012, d’anciens rebelles, essentiellement tutsi, intégrés dans l’armée en 2009 après un accord de paix. The M23 was born of an April 2012 mutiny by former rebels, principally Tutsis who were integrated into the army in 2009 following a peace agreement. M23 began as a mutiny in April 2012, former rebels, mainly Tutsi integrated into the national army in 2009 after a peace deal. The M23 was born into a mutiny in April 2012, of former rebels, mostly Tutsi, embedded in the army in 2009 after a peace deal. Tunks a déclaré au Sunday Telegraph de Sydney que toute la famille était «extrêmement préoccupée» du bienêtre de sa fille et voulait qu’elle rentre en Australie. Tunks told Sydney’s Sunday Telegraph the whole family was “extremely concerned” about his daughter’s welfare and wanted her back in Australia. Tunks told The Times of London from Sydney that the whole family was “extremely concerned” of the welfare of her daughter and wanted it to go in Australia. Tunks told the Sunday Telegraph in Sydney that the whole family was “extremely concerned” about her daughter’s well-being and wanted her to go into Australia. Table 4: Randomly chosen translation examples from French→English newstest2014 in comparison of those reported by Artetxe et al. (2018b). 5.3 Qualitative results Table 4 shows some translation examples from our proposed system in comparison to those reported by Artetxe et al. (2018b). We choose the exact same sentences reported by Artetxe et al. (2018b), which were randomly taken from newstest2014, so they should be representative of the general behavior of both systems. While not perfect, our proposed system produces generally fluent translations that accurately capture the meaning of the original text. Just in line with our quantitative results, this suggests that unsupervised machine translation can be a usable alternative in practical settings. Compared to Artetxe et al. (2018b), our translations are generally more fluent, which is not surprising given that they are produced by an NMT system rather than an SMT system. In addition to that, the system of Artetxe et al. (2018b) has some adequacy issues when translating named entities and numerals (e.g. 34 →32, Sunday Telegraph → The Times of London), which we do not observe for our proposed system in these examples. 
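The detokenized scores marked with ∗ throughout Section 5 are the kind of numbers SacreBLEU is designed to make reproducible. The exact invocation is not shown in the paper (the signature reported in the footnote corresponds to an older SacreBLEU release), but a present-day equivalent via the Python API would look roughly like this:

```python
import sacrebleu

def detok_bleu(hypotheses, references):
    """hypotheses/references: lists of detokenized sentence strings, same length.

    SacreBLEU applies the mteval-v13a-compatible "13a" tokenization by default,
    so this matches the detokenized BLEU convention used in Tables 1-3.
    """
    return sacrebleu.corpus_bleu(hypotheses, [references]).score
```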
6 Conclusions and future work In this paper, we identify several deficiencies in previous unsupervised SMT systems, and propose a more principled approach that addresses them by incorporating subword information, using a theoretically well founded unsupervised tuning method, and developing a joint refinement procedure. In addition to that, we use our improved SMT approach to initialize a dual NMT model that is further improved through on-the-fly backtranslation. Our experiments show the effectiveness of our approach, as we improve the previous state-of-the-art in unsupervised machine translation by 5-7 BLEU points in French-English and German-English WMT 2014 and 2016. Our code is available as an open source project at https: //github.com/artetxem/monoses. In the future, we would like to explore learnable similarity functions like the one proposed by (McCallum et al., 2005) to compute the characterlevel scores in our initial phrase-table. In addition to that, we would like to incorporate a language modeling loss during NMT training similar to He 202 et al. (2016). Finally, we would like to adapt our approach to more relaxed scenarios with multiple languages and/or small parallel corpora. Acknowledgments This research was partially supported by the Spanish MINECO (UnsupNMT TIN2017-91692EXP and DOMINO PGC2018-102041-B-I00, cofunded by EU FEDER), the BigKnowledge project (BBVA foundation grant 2018), the UPV/EHU (excellence research group), and the NVIDIA GPU grant program. Mikel Artetxe was supported by a doctoral grant from the Spanish MECD. References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462, Vancouver, Canada. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789–798. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632–3642, Brussels, Belgium. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018c. Unsupervised neural machine translation. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018). Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018). Qing Dou and Kevin Knight. 2012. Large scale decipherment for out-of-domain machine translation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 266–275, Jeju Island, Korea. Association for Computational Linguistics. Qing Dou and Kevin Knight. 2013. Dependency-based decipherment for resource-limited machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1668–1676, Seattle, Washington, USA. Association for Computational Linguistics. 

Qing Dou, Ashish Vaswani, Kevin Knight, and Chris Dyer. 2015. Unifying bayesian inference and vector space models for improved decipherment. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 836– 845, Beijing, China. Association for Computational Linguistics. Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648, Atlanta, Georgia. Association for Computational Linguistics. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489–500, Brussels, Belgium. Association for Computational Linguistics. Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, et al. 2018. Achieving human parity on automatic chinese to english news translation. arXiv preprint arXiv:1803.05567. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural Information Processing Systems 29, pages 820–828. Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified kneser-ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 690–696, Sofia, Bulgaria. Association for Computational Linguistics. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. arXiv preprint arXiv:1901.07291. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018). Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018b. 203 Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049, Brussels, Belgium. Association for Computational Linguistics. Vladimir I Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet physics doklady, volume 10, pages 707–710. Benjamin Marie and Atsushi Fujita. 2018. Unsupervised neural machine translation initialized by unsupervised statistical machine translation. arXiv preprint arXiv:1810.12703. Andrew McCallum, Kedar Bellare, and Fernando Pereira. 2005. A conditional random field for discriminatively-trained finite-state string edit distance. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, pages 388–395. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111–3119. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167, Sapporo, Japan. 
Association for Computational Linguistics. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1–9, Belgium, Brussels. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting bleu scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. Sujith Ravi and Kevin Knight. 2011. Deciphering foreign language. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 12– 21, Portland, Oregon, USA. Association for Computational Linguistics. Shuo Ren, Zhirui Zhang, Shujie Liu, Ming Zhou, and Shuai Ma. 2019. Unsupervised neural machine translation with smt as posterior regularization. arXiv preprint arXiv:1901.04112. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Unsupervised neural machine translation with weight sharing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 46–55. Association for Computational Linguistics. Omar Zaidan. 2009. Z-mert: A fully configurable open source tool for minimum error rate training of machine translation systems. The Prague Bulletin of Mathematical Linguistics, 91:79–88. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In The IEEE International Conference on Computer Vision (ICCV).
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1969–1979 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1969 Generating Long and Informative Reviews with Aspect-Aware Coarse-to-Fine Decoding Junyi Li1, Wayne Xin Zhao1,2∗, Ji-Rong Wen1,2, and Yang Song3 1School of Information, Renmin University of China 2Beijing Key Laboratory of Big Data Management and Analysis Methods 3Boss Zhipin {lijunyi,jrwen}@ruc.edu.cn [email protected] [email protected] Abstract Generating long and informative review text is a challenging natural language generation task. Previous work focuses on word-level generation, neglecting the importance of topical and syntactic characteristics from natural languages. In this paper, we propose a novel review generation model by characterizing an elaborately designed aspect-aware coarse-tofine generation process. First, we model the aspect transitions to capture the overall content flow. Then, to generate a sentence, an aspectaware sketch will be predicted using an aspectaware decoder. Finally, another decoder fills in the semantic slots by generating corresponding words. Our approach is able to jointly utilize aspect semantics, syntactic sketch, and context information. Extensive experiments results have demonstrated the effectiveness of the proposed model. 1 Introduction In the past decades, online review services (e.g., AMAZON and YELP) have been an important kind of information platforms where users post their feedbacks or comments about products (Kim et al., 2016). Usually, writing an informative and wellstructured review will require considerable efforts by users. To assist the writing process, the task of review generation has been proposed to automatically generate review text for a user given a product and her/his rating on it (Tang et al., 2016; Zhou et al., 2017). In the literature, various methods have been developed for review generation (Tang et al., 2016; Zhou et al., 2017; Ni et al., 2017; Wang and Zhang, 2017; Catherine and Cohen, 2018). Most of these methods adopt Recurrent Neural Networks (RNN) based methods, especially ∗Corresponding author the improved variants of Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit (GRU) (Cho et al., 2014). They fulfill the review generation task by performing the decoding conditioned on useful context information. Usually, an informative review is likely to consist of multiple sentences, containing substantive comments from users. Hence, a major problem of existing RNN-based methods is that they have limited capacities in producing long and informative text. More recently, Generative Adversarial Net (GAN) based methods (Zang and Wan, 2017; Yu et al., 2017; Guo et al., 2018; Xu et al., 2018a) have been proposed to enhance the generation of long, diverse and novel text. However, they still focus on word-level generation, and neglect the importance of topical and syntactic characteristics from natural languages. As found in the literature of linguistics (Pullum, 2010) and writing (Bateman and Zock, 2003), the writing process itself has involved multiple stages focusing on different levels of goals. We argue that an ideal review generation approach should follow the writing procedure of a real user and capture rich characteristics from natural language. With this motivation, we design an elaborative coarseto-fine generation process by considering the aspect semantics and syntactic characteristics. 
Figure 1 presents an illustrative example for our review generation process. First, we conceive the content flow that is characterized as an aspect sequence. An aspect describes some property or attribute about a product (Zhao et al., 2010), such as sound and service in this example. To generate a sentence, we further create a sentence skeleton containing semantic slots given the aspect semantics. The semantic slots denote the placeholders for useful syntactic information (e.g., Part-ofspeech tags). Finally, the semantic slots are filled with the generated words. The process is repeated 1970 Product ID: *****93428 User ID: *******QXGQ2 Rating: 5 Aspect: Sketch: this NN sounds RB great . i VBD VB this product fast IN the NN . price was WP it would cost on the JJ NN . Review: this microphone sounds surprisingly great . i did get this product fast through the mail . price was what it would cost on the open market . Sound Price Service Black Mini Microphone for iPhone 3G Black Mini Microphone for iPhone 3G Figure 1. An illustrative example for our generation process. We select a sample review on AMAZON. The aspect labels and sketches are manually created for explaining our idea, which will be learned by our model. until all sentences are generated. Based on such a generation process, in this paper, we propose a novel aspect-aware coarse-tofine decoder for generating product reviews. We first utilize unsupervised topic models to extract aspects and tag review sentences with aspect labels. We develop an attention-based RNN decoder to generate the aspect sequence conditioned on the context including users, items and ratings. By modeling the transitions of aspect semantics among sentences, we are able to capture the content flow of the whole review. Then, we generate a semantic template called sketch using an aspect-aware decoder, which represents the sentence skeleton. Finally, we generate the word content according to an informed decoder that considers aspect labels, sketch symbols and previously decoded words. Extensive experiments on three real-world review datasets have demonstrated the effectiveness of the proposed model. To our knowledge, it is the first review generation model that is able to jointly utilize aspect semantics, syntactic sketch, and context information. We decompose the entire generation process into three stages. In this way, the generation of long review text becomes more controllable, since we consider a simpler sequence generation task at each stage. Furthermore, we incorporate language characteristics (e.g., Part-of-Speech tags and ngrams) into the aspect-aware decoder to instruct the generation of well-structured text. 2 Related Work In recent years, researchers have made great progress in natural language generation (NLG) (Zhang et al., 2018; Zhou et al., 2018; Fan et al., 2018). As a special NLG task, automatic review generation has been proposed to assist the writing of online reviews for users. RNN-based methods have been proposed to generate the review content conditioned on useful context information (Tang et al., 2016; Zhou et al., 2017). Especially, the task of review generation is closely related to the studies in recommender systems that aim to predict the preference of a user over products. Hence, several studies propose to couple the solutions of the two lines of research work, and utilize the user-product interactions for improving the review generation (Ni et al., 2017; Wang and Zhang, 2017; Catherine and Cohen, 2018; Ni and McAuley, 2018). 
Although Ni and McAuley (2018) have explored aspect information to some extent, they characterize the generation process in a single stage and do not perform the coarse-tofine decoding. Besides, the aspect transition patterns have been not modeled. It has been found that RNN models tend to generate short, repetitive, and dull texts (Lin et al., 2018; Luo et al., 2018). For addressing this issue, Generative Adversarial Nets (GAN) based approaches have been recently proposed to generate long, diverse and novel text (Zang and Wan, 2017; Yu et al., 2017; Guo et al., 2018; Xu et al., 2018a). These methods usually utilize reinforcement learning techniques to deal with the generation of discrete symbols. However, they seldom consider the linguistic information from natural languages, which cannot fully address the difficulties of our task. Our work is inspired by the work of using sketches as intermediate representations (Dong and Lapata, 2018; Wiseman et al., 2018; Xu et al., 2018b; Su et al., 2018). These works usually focus on sentence- or utterance-level generation tasks, in which global aspect semantics and transitions have not been considered. Our work is also related to review data mining, especially the studies on topic or aspect extraction from review data (Qiu et al., 2017; Zhao et al., 2010). 3 Problem Formulation A review is a natural language text written by a user u on a product (or item) i with a rating score of r. Let V denote the vocabulary and y1:m = {⟨yj,1, · · · , yj,t, · · · , yj,nj⟩}m j=1 denote a review text consisting of m sentences, where yj,t ∈V denotes the t-th word of the j-th review sentence and nj is the length of the j-th sentence. 1971 We assume that the review generation process is decomposed into three different stages. First, a user generates an aspect sequence representing the major content flow for a review. To generate a sentence, we predict an aspect-aware sketch conditioned on an aspect label. Finally, based on the aspect label and the sketch, we generate the word content for a sentence. The process is repeated until all the sentences are generated. Let A denote a set of A aspects in our collection. Following (Zhao et al., 2010), we assume each review sentence is associated with an aspect label, describing some property or attribute about a product or an item. We derive an aspect sequence for a review text, denoted by a1:m = ⟨a1, · · · , aj, · · · , am⟩, where aj ∈A is the aspect label (or ID) of the j-th sentence. For each sentence, we assume that it is written according to some semantic sketch, which is also denoted by a symbol sequence. Let s1:m = {⟨sj,1, · · · , sj,t, · · · , sj,n′ j⟩}m j=1, where n′ j is the length of the j-th sketch, and sj,t is the t-th token of the j-th sketch denoting a word, a Part-ofSpeech tag, a bi-gram, etc. Based on the above notations, we are ready to define our task. Given user u, item i and the rating score r, we aim to automatically generate a review that is able to maximize the joint probability of the aspects, sketches and words Pr(y1:m, s1:m, a1:m|c) (1) = Pr(a1:m|c)Pr(s1:m|a1:m, c)Pr(y1:m|a1:m, s1:m, c), = m Y j=1 Pr(aj|a<j, c) Y j,t Pr(sj,t|sj,<t, aj, c) Y j,t Pr(yj,t|yj,<t, sj,t, aj, c), where c = {u, i, r} denotes the set of available context information. Note that, in training, we have aspects and sketches available, and learn the model parameters by optimizing the joint probability in Eq. 1 over all the seen reviews. While, for test, the aspects and sketches are unknown. 
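Written out in display form, the factorization of Eq. 1 that both training and inference work with is:

```latex
\begin{align}
\Pr(y_{1:m}, s_{1:m}, a_{1:m} \mid c)
  &= \Pr(a_{1:m} \mid c)\,\Pr(s_{1:m} \mid a_{1:m}, c)\,\Pr(y_{1:m} \mid a_{1:m}, s_{1:m}, c) \nonumber \\
  &= \prod_{j=1}^{m} \Pr(a_j \mid a_{<j}, c)
     \prod_{j,t} \Pr(s_{j,t} \mid s_{j,<t}, a_j, c)
     \prod_{j,t} \Pr(y_{j,t} \mid y_{j,<t}, s_{j,t}, a_j, c),
\end{align}
```

where $c = \{u, i, r\}$ is the available context information, as defined above.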
We need to first infer an aspect sequence and then predict the corresponding sketch for each sentence. Finally, we generate the review content based on the predicted aspect and sketch information. 4 The Proposed Approach Unlike previous works generating the review in a single stage, we decompose the generation proItem User Rating ... ... <s> the NN are the NN are pretty_well </s> ! ! pretty_well Sentence Decoder Sketch Encoder Sketch Decoder <s> vocals are the vocals are pretty_well ! Aspect Decoder Context Encoder sound the sketch ! </s> semantic slot aspect label pretty_well Figure 2. The overview of the proposed review generation model with the example of “the vocals are pretty well". The predicted aspect label is sound, and the generated sketch is “the NN are pretty_well". cess into three stages, namely aspect sequence generation, aspect-aware sketch generation and sketch-based sentence generation. We present an overview illustration of the proposed model in Fig. 2. Next we describe each part in detail. 4.1 Aspect Sequence Generation To learn the model for generating aspect sequences, we need to derive the aspect sequence for training, and then decode the aspect sequence based on the context encoder. Aspect Extraction. Aspects provide an informative summary about the feature or attribute information about a product or an item. For example, aspects of a restaurant may include food, staff and price, etc. It is time-consuming and laborious to manually discover the aspects from texts. Here, we use an automatic unsupervised topic modeling approach to learning the aspects from the review content. Based on the Twitter-LDA model (Zhao et al., 2011), we treat a review as a document consisting of multiple sentences. Each document is associated with a distribution over the aspects. When generating a sentence, an aspect label (or ID) is first sampled according to the document’s distribution over the aspects. Then, the entire sentence is generated according to the word distribution conditioned on the aspect label. To purify the aspect words, we further incorporate a background language model to absorb background words. When topic models have been learned, we can derive a set of A aspect-specific word distributions, denoted by {θa · }, where θa w denotes the probability of a word w from the vocabulary V in aspect a. 1972 Context Encoder. Our aspect generation module adopts an encoder-decoder architecture. We first develop the context encoder based on the information of user u, item i and rating score r. We first use a look-up layer to transform the three kinds of information into low-dimensional vectors. Let vu ∈RdE, vi ∈RdE and vr ∈RdE denote the embeddings for u, i and r respectively. Then, we feed the concatenated vector into a Multi-Layer Perceptron (MLP) and produce a single vectorized representation vc ∈RdC: vc = MLP([vu; vi; vr]). (2) The embedding vc summarizes the necessary information from the three kinds of context data. It is flexible to incorporate more kinds of useful information using a similar approach. Aspect Decoder. The decoder is built upon the GRU-based RNN network. Let hA j ∈RdHA denote a dHA-dimensional hidden vector at the j-th time step, which is computed via: hA j = GRU(hA j−1, vaj−1), (3) where vaj−1 ∈RdA is the embedding of the previous aspect label aj−1. The hidden vector of the first time step is initialized by the encoding vector hA 0 = vc in Eq. 2. Then, RNNs recurrently compute hidden vectors, and predict the next aspect label (or ID) aj. 
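Before the attention enhancement described next, the context encoder (Eq. 2) and the plain GRU aspect decoder (Eq. 3) can be sketched in PyTorch as follows. This is an illustrative sketch rather than the released code: the layer sizes follow the 512-dimensional setting reported later in Table 2, the Tanh nonlinearity in the MLP is an assumption, and the output layer here omits the attention-enhanced state of Eqs. 4-7.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Eq. 2: v_c = MLP([v_u; v_i; v_r])."""
    def __init__(self, n_users, n_items, n_ratings, d_e=512, d_c=512):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, d_e)
        self.item_emb = nn.Embedding(n_items, d_e)
        self.rate_emb = nn.Embedding(n_ratings, d_e)
        self.mlp = nn.Sequential(nn.Linear(3 * d_e, d_c), nn.Tanh())

    def forward(self, u, i, r):
        ctx = torch.cat([self.user_emb(u), self.item_emb(i), self.rate_emb(r)], dim=-1)
        return self.mlp(ctx)

class AspectDecoder(nn.Module):
    """Eq. 3: h_j = GRU(h_{j-1}, v_{a_{j-1}}), with h_0 = v_c."""
    def __init__(self, n_aspects, d_a=512, d_h=512):
        super().__init__()
        self.aspect_emb = nn.Embedding(n_aspects, d_a)
        self.cell = nn.GRUCell(d_a, d_h)
        self.out = nn.Linear(d_h, n_aspects)  # simplified; paper uses attention-enhanced state

    def forward(self, prev_aspect, h_prev):
        h = self.cell(self.aspect_emb(prev_aspect), h_prev)
        return self.out(h), h  # logits over the next aspect label, new hidden state
```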
Additionally, we use an attention mechanism (Luong et al., 2015) to enhance the effect of context information. We compute the attention score of context ck for the current time step of the decoder via: w(t) k = exp(tanh(W1[hA t ; vck])) P ck′ ∈{u,i,r} exp(tanh(W1[hA t ; vck′ ])), (4) where W1 is the parameter matrix to learn, and the attention vector ˜ct is obtained by: ˜ct = X ck∈{u,i,r} w(t) k vck (5) Finally, we compute the probability of the j-th aspect label p(at|a<j, c) via: Pr(aj|a<j, c) = softmax(W4˜hA j + b1), (6) ˜hA j = tanh(W2˜cj + W3hA j ), (7) where W2, W3, W4 and b1 are learnable parameter matrices or vector. 4.2 Aspect-Aware Sketch Generation A sketch is a symbol sequence describing the skeleton of a sentence, where each symbol denotes a semantic symbol such as a POS tag or a bi-gram. Similar to the aspect decoder, we also use the GRU-based RNNs to implement the sketch decoder. As shown in Fig. 1, the sketches w.r.t. varying aspects are likely to be different. Hence, we need to consider the effect of aspect information in the generation of a sketch. Let hS j,t ∈RdHS denote a dHS-dimensional hidden vector at time step t for the j-th sketch, which is computed via: hS j,t = GRU(hS j,t−1, xS j,t), (8) where xS j,t is further defined as xj,t = vsj,t−1 ⊙vaj, (9) where vsj,t−1 ∈RdS denotes the embedding for the previous sketch symbol sj,t−1, vaj denotes the embedding of the current aspect, and “⊙" denotes the element-wise product. In this way, the aspect information can be utilized at each time step for generating an entire sketch. We set the initial hidden vector for the j-th sketch as the last embedding of the previous sketch: hS j,0 = hS j−1,n′ j−1. Specifically, we have hS 1,0 = vc for initialization. Similar to Eq. 4 and 5, we can further use an attention mechanism for incorporating context information, and produce a context-enhanced sketch representation ˜hS j,t for time step t. Finally, we compute Pr(sj,t|sj,<t, aj, c) via: Pr(sj,t|sj,<t, aj, c) = softmax(W5˜hS j,t + W6vaj + b2), (10) where we incorporate the embedding vaj of the aspect aj for enhancing the aspect semantics. 4.3 Sketch-based Review Generation When the aspect sequence and the sketches are learned, we can generate the word content of a review. Here, we focus on the generation process of a single sentence. Sketch Encoder. To encode the sketch information, we employ the a bi-directional GRU encoder (Schuster and Paliwal, 1997; Cho et al., 2014) to encode the sketch sequence sj,1:n′ j into a list of hidden vectors {←→ h S j,t} n′ j t=1, where ←→ h S j,t denotes the hidden vector for the t-th position in the j-th sketch at time step t from the encoder. Different from Eq. 8, we use a bi-directional encoder 1973 since the sketch is available at this stage, capturing the global information from the entire sketch. Sentence Decoder. Consider the word generation at time step t. Let vyj,t−1 ∈RdY denotes the embedding of the previous word yj,t−1. As input, we concatenate the current sketch representation and the embedding of the previous word xY j,t = ←→ h S j,t ⊕vyj,t−1, (11) where “⊕" denotes the vector concatenation. Then, we compute the hidden vector hY j,t ∈RdHY for the j-th sentence via: hY j,t = GRU(hY j,t−1, xY j,t). (12) Similar to Eq. 4 and 5, we further leverage the context to obtain an enhanced state representation denoted by ˜hY j,t using the attention mechanism. 
Then we transform it into an intermediate vector with the dimensionality of the vocabulary size: z = tanh(W7[˜hY j,t; vsj,t] + b3), (13) where vsj,t is the embedding of the sketch symbol sj,t. By incorporating aspect-specific word distributions, we can apply the softmax function to derive the generative probability of the t-th word Pr(yj,t|yj,<t, sj,1:n′ j, aj, c) = softmax(zyj,t + θ aj yj,t), (14) where θaj yj,t is the probability from the word distribution for aspect aj. Here, we boost the importance of the words which have large probabilities in the corresponding topic models. In this process, the generation of words is required to match the generation of sketch symbols slot by slot. Here, we align words and sketch symbols by using the same indices for each slot for ease of understanding. However, the length of the sketch is not necessarily equal to that of the generated sentence, since a sketch symbol can correspond to a multiterm phrase. When the sketch token is a term or a phrase (e.g., bi-grams), we directly copy the original terms or phases to the output slot(s). 4.4 Training and Inference Integrating Eq. 6, 10 and 14 into Eq. 1, we derive the joint model for review generation. We take the log likelihood of Eq. 1 over all training reviews as the objective function. The joint objective function is difficult to be directly optimized. Hence, we Datasets #Users #Items #Reviews #Words AMAZON 89,672 31,829 681,004 22,570 YELP 95,617 37,112 1,063,420 31,861 RATEBEER 12,266 51,365 2,487,369 42,757 Table 1. Statistics of our datasets after preprocessing. incrementally train the three parts, and fine-tune the shared or dependent parameters in different modules with the joint objective. For training, we directly use the real aspects and sketches for learning the model parameters. For inference, we apply our model in a pipeline way: we first infer the aspect, then predict the sketches and finally generate the words using inferred aspects and sketches. During inference, for sequence generation, we apply the beam search method with beam size 4. In the three sequence generation modules of our model, we incorporate two special symbols to indicate the start and end of a sequence, namely START and END. Once we generate the END symbol, the generation process will be stopped. Besides, we set the maximum generation lengths for aspect sequence and sketch sequence to be 5 and 50, respectively. In the training procedure, we adopt the Adam optimizer (Kingma and Ba, 2014). In order to avoid overfitting, we adopt the dropout strategy with a rate of 0.2. More implementation details can be found in Section 5.1 (see Table 2). 5 Experiments In this section, we first set up the experiments, and then report the results and analysis. 5.1 Experimental Setup Datasets. We evaluate our model on three real-world review datasets, including AMAZON Electronic dataset (He and McAuley, 2016), YELP Restaurant dataset1, and RATEBEER dataset (McAuley et al., 2012). We convert all text into lowercase, and perform tokenization using NLTK2. We keep the words occurring at least ten times as vocabulary words. We discard reviews with more than 100 tokens, and remove users and products (or items) occurring fewer than five times. The reviews of each dataset are randomly split into training, validation and test sets (80%/10%/10%). The detailed statistics of the three datasets are summarized in Table 1. 
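The preprocessing just described (lowercasing, NLTK tokenization, a minimum word frequency of ten, dropping reviews longer than 100 tokens, and a random 80/10/10 split) can be sketched as below. The exact filtering order and random seed are assumptions, and the removal of users and items occurring fewer than five times is omitted for brevity.

```python
import random
from collections import Counter

from nltk.tokenize import word_tokenize

def preprocess(reviews, max_len=100, min_count=10, seed=0):
    """reviews: list of dicts with 'user', 'item', 'rating', 'text' fields."""
    tokenized = []
    for r in reviews:
        tokens = word_tokenize(r["text"].lower())
        if len(tokens) <= max_len:                     # discard overly long reviews
            tokenized.append({**r, "tokens": tokens})

    counts = Counter(t for r in tokenized for t in r["tokens"])
    vocab = {w for w, c in counts.items() if c >= min_count}

    random.seed(seed)
    random.shuffle(tokenized)
    n = len(tokenized)
    train = tokenized[: int(0.8 * n)]
    valid = tokenized[int(0.8 * n): int(0.9 * n)]
    test = tokenized[int(0.9 * n):]
    return vocab, train, valid, test
```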
1https://www.yelp.com/dataset 2https://www.nltk.org 1974 Modules Settings Aspect dA = 512, dE = 512, dHA = 512, #GRU-layer=2, batch-size=1024, init.-learning-rate=0.00002, Adam optimizer Sketch dS = 512, dHS = 512, #GRU-layer=2, batch-size=64, init.-learning-rate=0.0002, learning-rate-decay-factor=0.8, learning-rate-decay-epoch=2, Adam optimizer Review dY = 512, dHY = 512, #GRU-layer=2, batch-size=64, init.-learning-rate=0.0002, learning-rate-decay-factor=0.8, learning-rate-decay-epoch=2, Adam optimizer Table 2. Parameter settings of the three modules in our model. Aspect and Sketch Extraction. After the preprocessing, we use the Twitter-LDA model in (Zhao et al., 2011) for automatically learning the aspects and aspect keywords. The numbers of aspects are set to 10, 5, and 5 for the three datasets, respectively. The aspect numbers are selected using the perplexity score on validation set. By inspecting into the top aspect words, we find the learned aspects are very coherent and meaningful. For convenience, we ask a human labeler to annotate each learned aspect from topic models with an aspect label. Note that aspect labels are only for ease of presentation, and will not be used in our model. With topic models, we further tag each sentence with the aspect label which gives the maximum posterior probability conditioned on the words. To derive the sketches, we first extract the most popular 200 bi-grams and tri-grams by frequency. We replace their occurrences with n-gram IDs. Furthermore, we keep the words ranked in top 50 positions of an aspect, and replace the occurrences of the rest words with their Part-of-Speech tags. We also keep the top 50 frequent words in the entire text collection, such as background words “I" and “am". In this way, for each review, we obtain a sequence of aspect labels; for each sentence in the review, we obtain a sequence of sketch symbols. Aspect sequences and sketch sequences are only available during the training process. Baseline Models. We compare our model against a number of baseline models: • gC2S (Tang et al., 2016): It adopts an encoder-decoder architecture to generate review texts conditioned on context information through a gating mechanism. • Attr2Seq (Zhou et al., 2017): It adopts an attention-enhanced attribute-to-sequence architecture to generate reviews with input attributes. • TransNets (Catherine and Cohen, 2018): It applies a student-teacher like architecture for review generation by representing the reviews of a user and an item into a text-related representation, which is regularized to be similar to the actual review’s latent representation at training time. • ExpansionNet (Ni and McAuley, 2018): It uses an encoder-decoder framework to generate personalized reviews by incorporating short phrases (e.g., review summaries, product titles) provided as input and introducing aspect-level information (e.g., aspect words). • SeqGAN (Yu et al., 2017): It regards the generative model as a stochastic parameterized policy and uses Monte Carlo search to approximate the state-action value. The discriminator is a binary classifier to evaluate the sequence and guide the learning of the generative model. • LeakGAN (Guo et al., 2018): The generator is built upon a hierarchical reinforcement learning architecture, which consists of a high-level module and a low-level module, and the discriminator is a CNN-based feature extractor. The advantage is that this model can generate high-quality long text by introducing the leaked mechanism. 
Among these baselines, gC2S, Attr2Seq and TransNets are context-aware generation models in different implementation approaches, ExpansionNet introduces external information such as aspect words, and SeqGAN and LeakGAN are GAN based text generation models. Original SeqGAN and LeakGAN are designed for general sequence generation without considering context information (e.g., user, item, rating). The learned aspect keywords are provided as input for both ExpansionNet and our model. All the methods have several parameters to tune. We employ validation set to optimize the parameters in each method. To reproduce the results of our model, we report the parameter setting used throughout the experiments in Table 2. Our code is available at https://github.com/turboLJY/ Coarse-to-Fine-Review-Generation. Evaluation Metrics. To evaluate the performance of different methods on automatic review generation, we adopt six evaluation metrics, including Perplexity, BLEU-1/BLEU-4, ROUGE1/ROUGE-2/ROUGE-L. Perplexity3 is the standard measure for evaluating language models; 3https://en.wikipedia.org/wiki/Perplexity 1975 Datasets Models Perplexity BLEU-1(%) BLEU-4(%) ROUGE-1 ROUGE-2 ROUGE-L AMAZON gC2S 38.67 24.14 0.85 0.262 0.046 0.212 Attr2Seq 34.67 24.28 0.88 0.263 0.043 0.214 TransNets 34.21 21.61 0.60 0.227 0.026 0.199 ExpansionNet 31.50 26.56 0.95 0.290 0.052 0.262 SeqGAN 28.50 25.18 0.84 0.265 0.043 0.220 LeakGAN 27.66 25.66 0.92 0.267 0.050 0.236 Our model 26.55 28.22 1.04 0.315 0.066 0.280 YELP gC2S 35.52 24.39 0.87 0.243 0.046 0.188 Attr2Seq 33.12 24.71 0.89 0.245 0.047 0.191 TransNets 34.81 21.41 0.35 0.202 0.026 0.156 ExpansionNet 29.53 27.46 1.06 0.276 0.061 0.216 SeqGAN 26.84 24.83 0.99 0.253 0.054 0.192 LeakGAN 25.53 25.96 1.03 0.271 0.056 0.208 Our model 23.96 29.43 1.13 0.284 0.070 0.235 RATEBEER gC2S 17.81 32.13 5.55 0.379 0.140 0.331 Attr2Seq 16.84 32.21 5.80 0.380 0.142 0.331 TransNets 19.08 29.74 3.61 0.347 0.114 0.302 ExpansionNet 17.07 34.53 6.83 0.400 0.156 0.376 SeqGAN 14.30 32.41 5.62 0.369 0.146 0.337 LeakGAN 13.74 33.76 6.03 0.378 0.142 0.355 Our model 13.07 36.11 7.04 0.422 0.164 0.393 Table 3. Performance comparisons of different methods for automatic review generation using three datasets. BLEU (Papineni et al., 2002) measures the ratios of the co-occurrences of n-grams between the generated and real reviews; ROUGE (Lin, 2004) measures the review quality by counting the overlapping n-grams between the generated and real reviews. 5.2 Results and Analysis In this subsection, we construct a series of experiments on the effectiveness of the proposed model for the review generation task. Main Results. Table 3 presents the performance of different methods on automatic review generation. We can make the following observations. First, among the three context-based baselines, gC2S and Attr2Seq perform better than TransNets. The two models have similar network architectures, which are simpler than TransNets. We find they are easier to obtain a stable performance on large datasets. Second, GAN-based methods work better than the above baselines, especially LeakGAN. LeakGAN is specially designed for generating long text, and we adapt it to our task by incorporating context information. Third, ExpansionNet performs best among all the baseline models. A major reason is that it incorporates external knowledge such as review summaries, product titles and aspect keywords. Finally, our model outperforms all the baselines with a large margin. 
These baseline methods perform the generation in Models BLEU-1(%) ROUGE-1 Our model 28.22 0.315 w/o aspect 27.85 0.296 w/o sketch 25.95 0.273 Table 4. Ablation analysis on AMAZON dataset. a single stage. As a comparison, we use a multistage process to gradually generate long and informative reviews in a coarse-to-fine way. Our model is able to better utilize aspect semantics and syntactic sketch, which is the key of the performance improvement over baselines. Overall, the three datasets show the similar findings. In what follows, we will report the results on AMAZON data due to space limit. We select the best two baselines ExpansionNet and LeakGAN as reference methods. Ablation Analysis. The major novelty of our model is that it incorporates two specific modules to generate aspects and sketches respectively. To examine the contribution of the two modules, we compare our model with its two variants by removing either of the two modules. We present the BLEU-1 and ROUGE-1 results of our model and its two variants in Table 4. As we can see, both components are useful to improve the final performance, and the sketch generation module seems more important in our task. In our model, the aspect generation module is used to cover aspect semantics and generate informative review; 1976 the sketch generation module is able to utilize syntactic templates to improve the generation fluency, especially for long sentences. Current experiments evaluate the usefulness of the two modules based on the overall generation quality. Next, we verify their functions using two specific experiments, namely aspect coverage and fluency evaluation. Aspect Coverage Evaluation. A generated review is informative if it can effectively capture the semantic information of the real review. Following (Ni and McAuley, 2018), we examine the aspect coverage of different models. Recall that we have used topic models to tag each sentence with an aspect label (or ID). We analyze the average number of aspects in real and generated reviews, and compute on average how many aspects in real reviews are covered in generated reviews. We consider a review as covering an aspect if any of the top 50 words of an aspect exists in the review4. In Table 5, we first see an interesting observation that LeakGAN is able to generate more aspects but yield fewer real aspects. As a comparison, ExpansionNet and our model perform better than LeakGAN by covering more real aspects, since the two models use the aspect information to instruct the review generation. Our model is better than ExpansionNet by characterizing the aspect transition sequences. These results indicate the usefulness of the aspect generation module in capturing more semantic information related to a review. Fluency Evaluation. We continue to evaluate the usefulness of the sketch generation module in improving the fluency of the generated text. Following (Xu et al., 2018a), we construct the fluency evaluation to examine how likely the generated text is produced by human. We randomly choose 200 samples from test set. A sample contains the input contexts (i.e., user, item, rating), and the texts generated by different models. It is difficult to develop automatic evaluation methods for accurate fluency evaluation. Here, we invite two human annotators (excluding the authors of this paper) who have good knowledge in the domain of electronic reviews to assign scores to the generated reviews. 
They are required to assign a score to a generated (or real) review according to a 5point Likert scale5 on fluency. In the 5-point Lik4For accuracy, we manually remove the irrelevant words (about 5%∼10%) from the top 50 words in each aspect. 5https://en.wikipedia.org/wiki/Likert_scale Models # aspects (real) # aspects (generated) # covered aspects ExpansionNet 2.41 2.02 0.885 LeakGAN 2.41 2.18 0.630 Our model 2.41 2.03 1.076 Table 5. Aspect coverage evaluation on AMAZON dataset. Measures Gold ExpansionNet LeakGAN Our Fluency 4.01 3.29 3.26 3.54 Kappa 0.80 0.72 0.76 0.74 Table 6. Fluency evaluation on AMAZON dataset. ert scale, 5-point means “very satisfying”, while 1-point means “very terrible”. We further average the two annotated scores over the 200 inputs. The results are shown in Table 6. We can see that our model achieves the highest fluency score among the automatic methods. By using sketches, our model is able to leverage the learned syntactic patterns from available reviews. The Cohen’s kappa coefficients are above 0.7, indicating a high correlation and agreement between the two human annotators. 5.3 Qualitative Analysis In this part, we perform the qualitative analysis on the quality of the generated reviews. We present three sample reviews generated by our model in Table 7. As we can see, our model has covered most of the major aspects (with many overlapping aspect keywords) of the real reviews. Although some generated sentences do not follow the exact syntactic structures of real reviews, they are very readable to users. Our model is able to generate aspect-aware sketches, which are very helpful to instruct the generation of the word content. With the aspect and sketch generation modules, our model is able to produce informative reviews consisting of multiple well-structured sentences. Another interesting observation is that the polarities of the generated text also correspond to their real rating scores, since the rating score has been modeled in the context encoder. 6 Conclusion This paper presented a novel review generation model using an aspect-aware coarse-to-fine generation process. Unlike previous methods, our model decomposed the generation process into three stages focusing on different goals. We constructed extensive experiments on three real-world review datasets. 
The results have demonstrated 1977 Gold Standard Generated Sketch Generated Review the shipping was quick and easyservicevery good product at a reasonable price price 5mm male to 2 rca stereo audio cable sound highly recommend this product to anyoneoverall this cable worked_perfectly for my NNSsound the price was very JJ and i would_purchase NN from this NNprice it VBD on_time and in good NNservice i would_recommend itoverall this cable worked perfectly for my needssound the price was very reasonable and i would purchase another from this vendorprice it arrived on time and in good conditionservice i would recommend itoverall oxtail was good other than the flavors were very bland food place is small so if the tables are full be prepared to waitplace pay too much for what you getprice i will not be back to this locationoverall i had the NN NN and it was very JJfood the staff was JJ but service was a little JJservice i had a bad_experience at this NNplace i VBP not JJ if i will be back RBoverall i had the falafel wrap and it was very bland food the staff was friendly but service was a little slowservice i had a bad_experience at this place place i am not sure if i will be back againoverall the aroma is insanely sour from bad hopsaroma dark clear ruby red beat sugar flavor and strong alcohol in aftertasteflavor golden body with a small white head body dont waste your money on thisoverall VBZ an amber_body with a JJ NN headbody the flavor is very JJ with notes of NNflavor this beer has the JJS aroma of canned_corn i have ever VBNaroma pours an amber body with a white finger headbody the flavor is very horrible with notes of alcohol flavor this beer has the worst aroma of canned corn i have ever smelledaroma Table 7. Samples of the generated reviews by our model. The three reviews with rating scores of 5 (positive), 3 (neutral), and 1 (negative) are from AMAZON, YELP and RATEBEER datasets, respectively. For privacy, we omit the UIDs and PIDs. For ease of reading, colored aspect labels are manually created corresponding to the predicted aspect IDs by our model. We have underlined important overlapping terms between real and generated reviews. the effectiveness of our model in terms of overall generation quality, aspect coverage, and fluency. As future work, we will consider integrating more kinds of syntactic features from linguistic analysis such as dependency parsing. Acknowledgments This work was partially supported by the National Natural Science Foundation of China under Grant No. 61872369 and 61832017, the Fundamental Research Funds for the Central Universities, the Research Funds of Renmin University of China under Grant No. 18XNLG22 and 19XNQ047. References John Bateman and Michael Zock. 2003. Natural language generation. In The Oxford Handbook of Computational Linguistics 2nd edition. Rose Catherine and William Cohen. 2018. Transnets for review generation. Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1724– 1734. Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 731–742. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 889–898. Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. 2018. Long text generation via adversarial training with leaked information. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5141–5148. Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proceedings of the 25th International Conference on World Wide Web, WWW 2016, Montreal, Canada, April 11 - 15, 2016, pages 507–517. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Bona Kim, Seongseop Kim, and Cindy Y Heo. 2016. Analysis of satisfiers and dissatisfiers in online hotel reviews on social media. International Journal of Contemporary Hospitality Management, 28(9):1915–1936. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out. Junyang Lin, Xu Sun, Shuming Ma, and Qi Su. 2018. Global encoding for abstractive summarization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 163–169. 1978 Liangchen Luo, Jingjing Xu, Junyang Lin, Qi Zeng, and Xu Sun. 2018. An auto-encoder matching model for learning utterance-level semantic dependency in dialogue generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 702–707. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1412–1421. Julian J. McAuley, Jure Leskovec, and Dan Jurafsky. 2012. Learning attitudes and attributes from multiaspect reviews. In 12th IEEE International Conference on Data Mining, ICDM 2012, Brussels, Belgium, December 10-13, 2012, pages 1020–1025. Jianmo Ni, Zachary C. Lipton, Sharad Vikram, and Julian McAuley. 2017. Estimating reactions and recommending products with generative models of reviews. In Proceedings of the Eighth International Joint Conference on Natural Language Processing, IJCNLP 2017, Taipei, Taiwan, November 27 - December 1, 2017 - Volume 1: Long Papers, pages 783–791. Jianmo Ni and Julian McAuley. 2018. Personalized review generation by expanding phrases and attending on aspect-aware representations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 706–711. 
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA., pages 311–318. Geoffrey K Pullum. 2010. The land of the free and the elements of style. English Today, 26(2):34–44. Minghui Qiu, Yinfei Yang, Cen Chen, and Forrest Sheng Bao. 2017. Aspect extraction from product reviews using category hierarchy information. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 2: Short Papers, pages 675–680. Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Trans. Signal Processing, 45(11):2673–2681. Shang-Yu Su, Kai-Ling Lo, Yi Ting Yeh, and YunNung Chen. 2018. Natural language generation by hierarchical decoding with linguistic patterns. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 61–66. Jian Tang, Yifan Yang, Samuel Carton, Ming Zhang, and Qiaozhu Mei. 2016. Context-aware natural language generation with recurrent neural networks. CoRR, abs/1611.09900. Zhongqing Wang and Yue Zhang. 2017. Opinion recommendation using A neural model. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1626–1637. Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2018. Learning neural templates for text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 3174–3187. Jingjing Xu, Xuancheng Ren, Junyang Lin, and Xu Sun. 2018a. Diversity-promoting GAN: A crossentropy based generative adversarial network for diversified text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 November 4, 2018, pages 3940–3949. Jingjing Xu, Xuancheng Ren, Yi Zhang, Qi Zeng, Xiaoyan Cai, and Xu Sun. 2018b. A skeleton-based model for promoting coherence among sentences in narrative story generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4306–4315. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA., pages 2852–2858. Hongyu Zang and Xiaojun Wan. 2017. Towards automatic generation of product reviews from aspectsentiment scores. In Proceedings of the 10th International Conference on Natural Language Generation, INLG 2017, Santiago de Compostela, Spain, September 4-7, 2017, pages 168–177. Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2018. Learning to control the specificity in neural response generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1108–1117. Wayne Xin Zhao, Jing Jiang, Jianshu Weng, Jing He, Ee-Peng Lim, Hongfei Yan, and Xiaoming Li. 2011. 
Comparing twitter and traditional media using topic models. In Advances in Information Retrieval - 33rd European Conference on IR Research, ECIR 2011, 1979 Dublin, Ireland, April 18-21, 2011. Proceedings, pages 338–349. Wayne Xin Zhao, Jing Jiang, Hongfei Yan, and Xiaoming Li. 2010. Jointly modeling aspects and opinions with a maxent-lda hybrid. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP 2010, pages 56–65. Ming Zhou, Mirella Lapata, Furu Wei, Li Dong, Shaohan Huang, and Ke Xu. 2017. Learning to generate product reviews from attributes. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 623–632. Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 1520, 2018, Volume 1: Long Papers, pages 654–663.
2019
190
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1980–1991 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1980 PaperRobot: Incremental Draft Generation of Scientific Ideas Qingyun Wang1, Lifu Huang1, Zhiying Jiang1, Kevin Knight2, Heng Ji1,3, Mohit Bansal4, Yi Luan5 1 Rensselaer Polytechnic Institute 2 DiDi Labs 3 University of Illinois at Urbana-Champaign 4 University of North Carolina at Chapel Hill 5 University of Washington [email protected], [email protected] Abstract We present a PaperRobot who performs as an automatic research assistant by (1) conducting deep understanding of a large collection of human-written papers in a target domain and constructing comprehensive background knowledge graphs (KGs); (2) creating new ideas by predicting links from the background KGs, by combining graph attention and contextual text attention; (3) incrementally writing some key elements of a new paper based on memory-attention networks: from the input title along with predicted related entities to generate a paper abstract, from the abstract to generate conclusion and future work, and finally from future work to generate a title for a follow-on paper. Turing Tests, where a biomedical domain expert is asked to compare a system output and a human-authored string, show PaperRobot generated abstracts, conclusion and future work sections, and new titles are chosen over human-written ones up to 30%, 24% and 12% of the time, respectively.1 1 Introduction Our ambitious goal is to speed up scientific discovery and production by building a PaperRobot, who addresses three main tasks as follows. Read Existing Papers. Scientists now find it difficult to keep up with the overwhelming amount of papers. For example, in the biomedical domain, on average more than 500K papers are published every year2, and more than 1.2 million new papers are published in 2016 alone, bringing the total number of papers to over 26 million (Van Noorden, 2014). However, human’s reading ability 1The programs, data and resources are publicly available for research purpose at: https://github.com/ EagleW/PaperRobot 2http://dan.corlan.net/medline-trend/ language/absolute.html keeps almost the same across years. In 2012, US scientists estimated that they read, on average, only 264 papers per year (1 out of 5000 available papers), which is, statistically, not different from what they reported in an identical survey last conducted in 2005. PaperRobot automatically reads existing papers to build background knowledge graphs (KGs), in which nodes are entities/concepts and edges are the relations between these entities (Section 2.2). Abstract Conclusion and Future Work Human Written Title Old Human Written Papers Enriched Knowledge Graphs New Title Abstract ... 1st Paper 2nd Paper Conclusion and Future Work Figure 1: PaperRobot Incremental Writing Create New Ideas. Scientific discovery can be considered as creating new nodes or links in the knowledge graphs. Creating new nodes usually means discovering new entities (e.g., new proteins) through a series of real laboratory experiments, which is probably too difficult for PaperRobot. In contrast, creating new edges is easier to automate using the background knowledge graph as the starting point. Foster et al. (2015) shows that more than 60% of 6.4 million papers in biomedicine and chemistry are about incremental work. 
This inspires us to automate the incremental creation of new ideas and hypotheses by predicting new links in background KGs. In fact, when there is more data available, we can construct larger and richer background KGs for more reliable link prediction. Recent work (Ji et al., 2015b) successfully mines strong relevance between drugs and diseases from biomedical pa1981 transcription  Knowledge Extraction Graph Attention Link Prediction Text Attention Knowledge Extraction Entity Retrieval Background Knowledge Graphs Contextual Sentences Salient Entities Old Papers Existing Paper Reading Enriched Knowledge Graphs New Paper Writing Reference Embedding cells  Snail  ... ... Related Entity Embedding nasopharyngeal carcinoma  Maspin  Snail  ... ... Diallyl Disulfide Bidirectional GRU Memory Attention qk weighted sum sum Memory Initialization Hop =1,2,..., k φ ... ... Reference Attention Memory Network ... ... ... Final Distribution Reference Distribution Memory Distribution Language Distribution <SOS> is qk−1 Figure 2: PaperRobot Architecture Overview pers based on KGs constructed from weighted cooccurrence. We propose a new entity representation that combines KG structure and unstructured contextual text for link prediction (Section 2.3). Write a New Paper about New Ideas. The final step is to communicate the new ideas to the reader clearly, which is a very difficult thing to do; many scientists are, in fact, bad writers (Pinker, 2014). Using a novel memory-attention network architecture, PaperRobot automatically writes a new paper abstract about an input title along with predicted related entities, then further writes conclusion and future work based on the abstract, and finally predicts a new title for a future follow-on paper, as shown in Figure 1 (Section 2.4). We choose biomedical science as our target domain due to the sheer volume of available papers. Turing tests show that PaperRobot-generated output strings are sometimes chosen over humanwritten ones; and most paper abstracts only require minimal edits from domain experts to become highly informative and coherent. 2 Approach 2.1 Overview The overall framework of PaperRobot is illustrated in Figure 2. A walk-through example produced from this whole process is shown in Table 1. In the following subsections, we will elaborate on the algorithms for each step. 2.2 Background Knowledge Extraction From a massive collection of existing biomedical papers, we extract entities and their relations to construct background knowledge graphs (KGs). We apply an entity mention extraction and linking system (Wei et al., 2013) to extract mentions of three entity types (Disease, Chemical and Gene) which are the core data categories in the Comparative Toxicogenomics Database (CTD) (Davis et al., 2016), and obtain a Medical Subject Headings (MeSH) Unique ID for each mention. Based on the MeSH Unique IDs, we further link all entities to the CTD and extract 133 subtypes of relations such as Marker/Mechanism, Therapeutic, and Increase Expression. Figure 3 shows an example. 2.3 Link Prediction After constructing the initial KGs from existing papers, we perform link prediction to enrich them. Both contextual text information and graph structure are important to represent an entity, thus we combine them to generate a rich representation for each entity. Based on the entity representations, we determine whether any two entities are semantically similar, and if so, we propagate the neighbors of one entity to the other. 
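To make this propagation idea concrete, the sketch below copies neighbors between entities judged similar and records them as candidate links. The similarity function and threshold here are placeholders for illustration; in the model itself, candidates are scored with the learned entity and relation representations described in the rest of this section rather than a fixed cutoff.

```python
# Illustrative neighbor propagation between similar entities. The
# similarity function and threshold are stand-ins, not the learned
# link-prediction score used by the model.
from collections import defaultdict

def propose_candidate_links(neighbors, similarity, threshold=0.8):
    """neighbors: dict mapping entity -> set of (relation, neighbor) pairs."""
    candidates = defaultdict(set)
    entities = list(neighbors)
    for i, a in enumerate(entities):
        for b in entities[i + 1:]:
            if similarity(a, b) < threshold:
                continue
            # Similar entities inherit each other's edges as *candidates*,
            # to be verified by the link-prediction model.
            candidates[a] |= neighbors[b] - neighbors[a]
            candidates[b] |= neighbors[a] - neighbors[b]
    return candidates

toy_kg = {
    "chemical_A": {("increase expression", "gene_X")},
    "chemical_B": {("increase expression", "gene_X"),
                   ("decrease reaction", "gene_Y")},
}
print(propose_candidate_links(toy_kg, lambda a, b: 0.9)["chemical_A"])
# -> {('decrease reaction', 'gene_Y')}
```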
For example, in Figure 3, because Calcium and Zinc are similar in terms of contextual text information and graph structure, we predict two new neighbors for Calcium: CD14 molecule and neuropilin 2 which are neighbors of Zinc in the initial KGs. We formulate the initial KGs as a list of tuples numbered from 0 to κ. Each tuple (eh i , ri, et i) is composed of a head entity eh i , a tail entity et i, and their relation ri. Each entity ei may be involved in multiple tuples and its one-hop connected neighbors are denoted as Nei = [ni1, ni2, ...]. ei is 1982 Calcium Zinc caspase 3 cyclin D1 affect reaction affect cotreatment increase cleavage affect cotreatment decrease expression increase expression AKT serine/threonine kinase 1 increases phosphorylation decrease phosphorylation decrease reaction increase reaction CD14 molecule neuropilin 2 paraoxonase 1 prepronociceptin decrease reaction decrease abundance increase reaction increase expression Knowledge Graph Gene Chemical Contextual Sentence: So, Ca2+possibly promoted caspases activation upstream of cytochrome c release, but inactivated caspase activity by calpain and/or fast depletion of ATP; whereas Zn2+ blocked the activation ofprocaspase‐3 with no visible change in the level of cytochrome c, and the block possibly resulted from its direct inhibition on caspase‐3 enzyme. affect transport affect binding Figure 3: Biomedical Knowledge Extraction and Link Prediction Example (dash lines are predicted links) also associated with a context description si which is randomly selected from the sentences where ei occurs. We randomly initialize vector representations ei and ri for ei and ri respectively. Graph Structure Encoder To capture the importance of each neighbor’s feature to ei, we perform self-attention (Veliˇckovi´c et al., 2018) and compute a weight distribution over Nei: e ′ i = Weei, n ′ ij = Wenij cij = LeakyReLU(Wf(e ′ i ⊕n ′ ij)) c ′ i = Softmax(ci) where We is a linear transformation matrix applied to each entity. Wf is the parameter for a single layer feedforward network. ⊕denotes the concatenation operation between two matrices. Then we use c ′ i and Nei to compute a structure based context representation of ϵi = σ P c ′ ijn ′ ij  , where nij ∈Nei and σ is Sigmoid function. In order to capture various types of relations between ei and its neighbors, we further perform multi-head attention on each entity, based on multiple linear transformation matrices. Finally, we get a structure based context representation ˜ei = [ϵ0 i ⊕... ⊕ϵM i ], where ϵm i refers to the context representation obtained with the m-th head, and ˜ei is the concatenated representation based on the attention of all M heads. Contextual Text Encoder Each entity e is also associated with a context sentence [w1, ..., wl]. To incorporate the local context information, we first apply a bi-directional long short-term memory (LSTM) (Graves and Schmidhuber, 2005) network to get the encoder hidden states Hs = [h1, ..., hl], where hi represents the hidden state of wi. Then we compute a bilinear attention weight for each word wi: µi = e⊤Wshi, µ ′ = Softmax(µ), where Ws is a bilinear term. We finally get the context representation ˆe = µ ′⊤hi. 
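As a concrete illustration of the two encoders just described, the following PyTorch-style sketch computes the structure-based representation ˜e with multi-head attention over an entity's one-hop neighbors and the context representation ˆe with bilinear attention over BiLSTM states. The dimensions, number of heads, and single-example (unbatched) interface are illustrative assumptions, not the authors' released configuration.

```python
# PyTorch-style sketch of the graph structure encoder and the contextual
# text encoder; shapes and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphStructureEncoder(nn.Module):
    """Multi-head self-attention over an entity's one-hop neighbors."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.ModuleDict({
                "We": nn.Linear(dim, dim, bias=False),    # linear map W_e
                "Wf": nn.Linear(2 * dim, 1, bias=False),  # feedforward scorer W_f
            }) for _ in range(heads)
        )

    def forward(self, e, neighbors):
        # e: (dim,) entity embedding; neighbors: (num_neighbors, dim)
        per_head = []
        for head in self.heads:
            e_p = head["We"](e)                         # e'_i
            n_p = head["We"](neighbors)                 # n'_ij
            pair = torch.cat([e_p.expand_as(n_p), n_p], dim=-1)
            c = F.leaky_relu(head["Wf"](pair)).squeeze(-1)
            alpha = F.softmax(c, dim=-1)                # c'_i
            per_head.append(torch.sigmoid(alpha @ n_p)) # epsilon_i
        return torch.cat(per_head, dim=-1)              # concatenated ~e_i

class ContextualTextEncoder(nn.Module):
    """Bilinear attention over BiLSTM states of the context sentence."""
    def __init__(self, dim):
        super().__init__()
        self.lstm = nn.LSTM(dim, dim // 2, bidirectional=True, batch_first=True)
        self.Ws = nn.Linear(dim, dim, bias=False)       # bilinear term W_s

    def forward(self, e, context_embeds):
        # e: (dim,); context_embeds: (sentence_len, dim) word embeddings
        H, _ = self.lstm(context_embeds.unsqueeze(0))
        H = H.squeeze(0)                                # (sentence_len, dim)
        mu = H @ self.Ws(e)                             # mu_i = e^T W_s h_i
        weights = F.softmax(mu, dim=-1)
        return weights @ H                              # ^e

dim = 64
e = torch.randn(dim)
tilde_e = GraphStructureEncoder(dim)(e, torch.randn(5, dim))
hat_e = ContextualTextEncoder(dim)(e, torch.randn(12, dim))
print(tilde_e.shape, hat_e.shape)  # torch.Size([256]) torch.Size([64])
```

Giving each attention head its own W_e mirrors the multiple linear transformation matrices mentioned above.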
Gated Combination To combine the graph-based representation ˜e and local context based representations ˆe, we design a gate function to balance these two types of information: ge = σ(˜ge), e = ge ⊙˜e + (1 −ge) ⊙ˆe where ge is an entity-dependent gate function of which each element is in [0, 1], ˜ge is a learnable parameter for each entity e, σ is a Sigmoid function, and ⊙is an element-wise multiplication. Training and Prediction To optimize both entity and relation representations, following TransE (Bordes et al., 2013), we assume the relation between two entities can be interpreted as translations operated on the entity representations, namely h + r ≈t if (h, r, t) holds. Therefore, for each tuple (eh i , ri, et i), we can compute their distance score: F(eh i , ri, et i) =∥eh i + ri −et i ∥2 2. We use marginal loss to train the model: Loss = X (eh i ,ri,et i)∈K X (¯eh i ,¯ri,¯et i)∈¯ K max(0, γ + F(eh i , ri, et i) −F(¯eh i , ¯ri, ¯et i)) where (eh, r, th) is a positive tuple and (¯eh, ¯rh, ¯th) is a negative tuple, and γ is a margin. The negative tuples are generated by either replacing the head or the tail entity of positive tuples with a randomly chosen different entity. 1983 Title Snail transcription factor negatively regulates maspin tumor suppressor in human prostate cancer cells Entities Related: nasopharyngeal carcinoma; diallyl disulfide Output Human (Neal et al., 2012) System Post-edited by Human Abstract Background: Maspin, a putative tumor suppressor that is down-regulated in breast and prostate cancer, has been associated with decreased cell motility. Snail transcription factor is a zinc finger protein that is increased in breast cancer and is associated with increased tumor motility and invasion by induction of epithelial-mesenchymal transition (EMT). We investigated the molecular mechanisms by which Snail increases tumor motility and invasion utilizing prostate cancer cells. Methods: Expression levels were analyzed by RT-PCR and western blot analyses. Cell motility and invasion assays were performed , while Snail regulation and binding to maspin promoter was analyzed by luciferase reporter and chromatin immunoprecipitation (ChIP) assays. Results: Snail protein expression was higher in different prostate cancer cells lines as compared to normal prostate epithelial cells. Background: Snail is a multifunctional protein that plays an important role in the pathogenesis of prostate cancer. However, it has been shown to be associated with poor prognosis. The purpose of this study was to investigate the effect of negatively on the expression of maspin in human nasopharyngeal carcinoma cell lines. Methods: Quantitative real-time PCR and western blot analysis were used to determine whether the demethylating agent was investigated by quantitative RT-PCR (qRT-PCR) and Western blotting. Results showed that the binding protein plays a significant role in the regulation of tumor growth and progression. Background: Snail is a multifunctional protein that plays an important role in the pathogenesis of prostate cancer. It has been shown associated with poor prognosis. The purpose of this study is to investigate the negative effect of on the expression of Maspin in human nasopharyngeal carcinoma cell lines. Methods: Quantitative RT-PCR (qRT-PCR) and western blot analyses were used to determine correlation of the two proteins expressions. Results showed that the binding protein plays a significant role in the regulation of tumor growth and progression. 
Conclusion and Future work Collectively, our results indicate for the first time that Snail can negatively regulate maspin through direct promoter repression resulting in increased migration and invasion in prostate cancer cells. This study reveals a novel mechanism of how Snail may function and show the importance of therapeutic targeting of Snail signaling in future. In summary, our study demonstrates that Snail negatively inhibited the expression of Maspin in human nasopharyngeal carcinoma cell lines and in vitro. Our results indicate that the combination of the demethylating agent might be a potential therapeutic target for the treatment of prostate cancer. In summary, our study in vitro demonstrates that Snail negatively inhibits the expression of Maspin in human nasopharyngeal carcinoma cell lines. Our results further indicate that Maspin might be a potential therapeutic target for the treatment of prostate cancer. New Title Role of maspin in cancer (Berardi et al., 2013) The role of nasopharyngeal carcinoma in the rat model of prostate cancer cells The role of Maspin in the rat model of nasopharyngeal carcinoma cells Table 1: Comparison of Human and System Written Paper Elements (bold words are topically related entities; italic words show human edits) After training, for each pair of indirectly connected entities ei, ej and a relation type r, we compute a score y to indicate the probability that (ei, r, ej) holds, and obtain an enriched knowledge graph eK = [(eh κ+1, rκ+1, et κ+1, yκ+1)...]. 2.4 New Paper Writing In this section, we use title-to-abstract generation as a case study to describe the details of our paper writing approach. Other tasks (abstract-toconclusion and future work, and conclusion and future work-to-title) follow the same architecture. Given a reference title τ = [w1, ..., wl], we apply the knowledge extractor (Section 2.2) to extract entities from τ. For each entity, we retrieve a set of related entities from the enriched knowledge graph eK after link prediction. We rank all the related entities by confidence scores and select up to 10 most related entities Eτ = [eτ 1, ..., eτ v]. Then we feed τ and Eτ together into the paper generation framework as shown in Figure 2. The framework is based on a hybrid approach of a Mem2seq model (Madotto et al., 2018) and a pointer generator (Gu et al., 2016; See et al., 2017). It allows us to balance three types of sources for each time step during decoding: the probability of generating a token from the entire word vocabulary based on language model, the probability of copying a word from the reference title, such as regulates in Table 1, and the probability of incorporating a related entity, such as Snail in Table 1. The output is a paragraph Y = [y1, ..., yo].3 Reference Encoder For each word in the refer3During training, we truncate both of the input and the output to around 120 tokens to expedite training. We label the words with frequency < 5 as Out-of-vocabulary. 1984 ence title, we randomly embed it into a vector and obtain τ = [w1, ..., wl]. Then, we apply a bi-directional Gated Recurrent Unit (GRU) encoder (Cho et al., 2014) on τ to produce the encoder hidden states H = [h1, ..., hl]. Decoder Hidden State Initialization Not all predicted entities are equally relevant to the title. 
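For reference, the link scoring and entity retrieval that produce these related entities can be sketched as follows. The sigmoid mapping from the TransE-style distance to a confidence y, the maximization over title entities and relation types, and the toy candidate pool are assumptions for illustration; the text above states only that a score y is computed for each indirectly connected pair and that up to 10 related entities are kept.

```python
# Sketch of turning trained TransE-style embeddings into ranked related
# entities for an input title. Confidence mapping and candidate set are
# illustrative assumptions, not the authors' exact procedure.
import torch

def tuple_score(h, r, t):
    """F(h, r, t) = ||h + r - t||_2^2 (lower distance = more plausible)."""
    return ((h + r - t) ** 2).sum(-1)

def retrieve_related(title_entity_vecs, candidate_vecs, relation_vecs, k=10):
    confidences = {}
    for cand_name, t in candidate_vecs.items():
        best = None
        for h in title_entity_vecs.values():
            for r in relation_vecs.values():
                y = torch.sigmoid(-tuple_score(h, r, t))  # assumed mapping to [0, 1]
                best = y if best is None else torch.maximum(best, y)
        confidences[cand_name] = float(best)
    # Keep the k most confident related entities for the title.
    return sorted(confidences.items(), key=lambda kv: -kv[1])[:k]

dim = 64
title_entities = {"maspin": torch.randn(dim), "snail": torch.randn(dim)}
candidates = {f"entity_{i}": torch.randn(dim) for i in range(30)}
relations = {"marker/mechanism": torch.randn(dim), "therapeutic": torch.randn(dim)}
print(retrieve_related(title_entities, candidates, relations)[:3])
```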
For example, for the title in Table 2, we predict multiple related entities including nasopharyngeal carcinoma and diallyl disulfide, but nasopharyngeal carcinoma is more related because nasopharyngeal carcinoma is also a cancer related to snail transcription factor, while diallyl disulfide is less related because diallyl disulfide’s anticancer mechanism is not closely related to maspin tumor suppressor. We propose to apply memoryattention networks to further filter the irrelevant ones. Recent approaches (Sukhbaatar et al., 2015; Madotto et al., 2018) show that compared with soft-attention, memory-based multihop attention is able to refine the attention weight of each memory cell to the query multiple times, drawing better correlations. Therefore, we apply a multihop attention mechanism to generate the initial decoder hidden state. Given the set of related entities E = [e1, ..., ev], we randomly initialize their vector representation E = [e1, ..., ev] and store them in memories. Then we use the last hidden state of reference encoder hl as the first query vector q0, and iteratively compute the attention distribution over all memories and update the query vector: pki = ν⊤ k tanh  W k q qk−1 + U k e ei + bk  qk = p⊤ k e + qk−1 where k denotes the k-th hop among ϕ hops in total.4 After ϕ hops, we obtain qϕ and take it as the initial hidden state of the GRU decoder. Memory Network To better capture the contribution of each entity ej to each decoding output, at each decoding step i, we compute an attention weight for each entity and apply a memory network to refine the weights multiple times. We take the hidden state ˜hi as the initial query ˜q0 = ˜hi and iteratively update it: ˜pkj = ν⊤ k tanh  f W k ˜q ˜qk−1 + eU k e ej + Wˆcˆcij + bk  uik = ˜p ′⊤ k ej, ˜qk = uik + ˜qk−1 4We set ϕ = 3 since it performs the best on the development set. where ˆcij = Pi−1 m=0 βmj is an entity coverage vector and βi is the attention distribution of last hop βi = ˜p ′ ψ, and ψ is the total number of hops. We then obtain a final memory based context vector for the set of related entities χi = uiψ. Reference Attention Our reference attention is similar to (Bahdanau et al., 2015; See et al., 2017), which aims to capture the contribution of each word in the reference title to the decoding output. At each time step i, the decoder receives the previous word embedding and generate decoder state ˜hi, the attention weight of each reference token is computed as: αij = ς⊤tanh  Wh˜hi + Wτhj + W˜c˜cij + bτ  α ′ i = Softmax (αi) ; φi = α ′⊤ i hj ˜cij = Pi−1 m=0 αmj is a reference coverage vector, which is the sum of attention distributions over all previous decoder time steps to reduce repetition (See et al., 2017). φi is the reference context vector. Generator For a particular word w, it may occur multiple times in the reference title or in multiple related entities. Therefore, at each decoding step i, for each word w, we aggregate its attention weights from the reference attention and memory attention distributions: P i τ = P m|wm=w α ′ im and P i e = P m|w∈em βim respectively. In addition, at each decoding step i, each word in the vocabulary may also be generated with a probability according to the language model. The probability is computed from the decoder state ˜hi, the reference context vector φi, and the memory context vector χi: Pgen = Softmax(Wgen[˜hi; φi; χi] + bgen), where Wgen and bgen are learnable parameters. 
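A minimal sketch of the multi-hop memory attention described above, used to initialize the decoder state from the related-entity memories, is given below. The per-hop parameters, the softmax normalization of the hop scores, and the shapes are assumptions for illustration rather than the authors' implementation.

```python
# Multi-hop memory attention for decoder-state initialization (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHopMemoryInit(nn.Module):
    def __init__(self, dim, hops=3):  # phi = 3 performed best on the dev set
        super().__init__()
        self.hops = hops
        self.Wq = nn.ModuleList(nn.Linear(dim, dim) for _ in range(hops))
        self.Ue = nn.ModuleList(nn.Linear(dim, dim, bias=False) for _ in range(hops))
        self.v = nn.ParameterList(nn.Parameter(torch.randn(dim)) for _ in range(hops))

    def forward(self, h_last, entity_mem):
        # h_last: (dim,) last reference-encoder state, used as query q0
        # entity_mem: (num_entities, dim) related-entity embeddings
        q = h_last
        for k in range(self.hops):
            scores = torch.tanh(self.Wq[k](q) + self.Ue[k](entity_mem)) @ self.v[k]
            p = F.softmax(scores, dim=-1)   # attention over memory cells
            q = p @ entity_mem + q          # q_k = p_k^T E + q_{k-1}
        return q                            # initial hidden state of the GRU decoder

dim = 128
init = MultiHopMemoryInit(dim)
q_phi = init(torch.randn(dim), torch.randn(6, dim))
print(q_phi.shape)  # torch.Size([128])
```

During decoding, the same memory of related entities is re-queried at every step, and the resulting attention weights contribute the entity copy distribution Pe, which is mixed with Pτ and Pgen as described next.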
To combine Pτ, Pe and Pgen, we compute a gate gτ as a soft switch between generating a word from the vocabulary and copying words from the reference title τ or the related entities E: gp = σ(W ⊤ p ˜hi + W ⊤ z zi−1 + bp), where zi−1 is the embedding of the previous generated token at step i −1. Wp, Wz, and bp are learnable parameters, and σ is a Sigmoid function. We also compute a gate ˜gp as a soft switch between copying words from reference text and the related entities: ˜gp = σ(W ⊤ φ φi + W ⊤ χ χi + ˜bp), where Wφ, Wχ, and ˜bp are learnable parameters. The final probability of generating a token z at decoding step i can be computed by: P(zi) = gpPgen + (1 −gp) (˜gpPτ + (1 −˜gp)Pe) 1985 Dataset # papers # avg entities in Title / paper # avg predicted related entities / paper Title-toAbstract Abstract-to-Conclusion and Future work Conclusion and Future work-to-Title Training 22,811 22,811 15,902 4.8 Development 2,095 2,095 2,095 5.6 6.1 Test 2,095 2,095 2,095 5.7 8.5 Table 2: Paper Writing Statistics Model Title-to-Abstract Abstract-to-Conclusion and Future Work Conclusion and Future Work-to-Title Perplexity METEOR Perplexity METEOR Perplexity METEOR Seq2seq (Bahdanau et al., 2015) 19.6 9.1 44.4 8.6 49.7 6.0 Editing Network (Wang et al., 2018b) 18.8 9.2 30.5 8.7 55.7 5.5 Pointer Network (See et al., 2017) 146.7 8.5 74.0 8.1 47.1 6.6 Our Approach (-Repetition Removal) 13.4 12.4 24.9 12.3 31.8 7.4 Our Approach 11.5 13.0 18.3 11.2 14.8 8.9 Table 3: Automatic Evaluation on Paper Writing for Diagnostic Tasks (%). The Pointer Network can be viewed as removing memory network part from our approach without repetition removal. The loss function, combined with the coverage loss (See et al., 2017) for both reference attention and memory distribution, is presented as: Loss = X i −log P(zi) + λ X i (min (αij, ˜cij) + min (βij, ˆcij)) where P(zi) is the prediction probability of the ground truth token zi, and λ is a hyperparameter. Repetition Removal Similar to many other long text generation tasks (Suzuki and Nagata, 2017), repetition remains a major challenge (Foster and White, 2007; Xie, 2017). In fact, 11% sentences in human written abstracts include repeated entities, which may mislead the language model. Following the coverage mechanism proposed by (Tu et al., 2016; See et al., 2017), we use a coverage loss to avoid any entity in reference input text or related entity receiving attention multiple times. We further design a new and simple masking method to remove repetition during the test time. We apply beam search with beam size 4 to generate each output, if a word is not a stop word or punctuation and it is already generated in the previous context, we will not choose it again in the same output. 3 Experiment 3.1 Data We collect biomedical papers from the PMC Open Access Subset.5 To construct ground truth for new title prediction, if a human written paper A 5ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/ oa_package/ cites a paper B, we assume the title of A is generated from B’s conclusion and future work session. We construct background knowledge graphs from 1,687,060 papers which include 30,483 entities and 875,698 relations. Tables 2 shows the detailed data statistics. The hyperparameters of our model are presented in the Appendix. 3.2 Automatic Evaluation Previous work (Liu et al., 2016; Li et al., 2016; Lowe et al., 2015) has proven it to be a major challenge to automatically evaluate long text generation. 
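Before turning to the metrics, the decoding-time machinery of Section 2.4 (mixing the three distributions with the two gates, the coverage penalty, and the test-time repetition mask) can be consolidated in the following illustrative sketch; the toy vocabulary, gate values, and stop-word list are assumptions, and the coverage term would be scaled by λ in the full loss.

```python
# Illustrative consolidation of the decoding-time components described
# in Section 2.4; values and vocabulary are toy examples.
import torch

def final_distribution(p_gen, p_tau, p_e, g_p, g_tilde):
    """P(z_i) = g_p * P_gen + (1 - g_p) * (g~_p * P_tau + (1 - g~_p) * P_e)."""
    return g_p * p_gen + (1 - g_p) * (g_tilde * p_tau + (1 - g_tilde) * p_e)

def coverage_penalty(alpha, c_ref, beta, c_mem):
    """Unscaled sum of min(attention, coverage) for reference and memory attention."""
    return torch.minimum(alpha, c_ref).sum() + torch.minimum(beta, c_mem).sum()

def mask_repetitions(logprobs, generated, vocab, stop_words):
    """At test time, forbid re-generating content tokens already emitted."""
    masked = logprobs.clone()
    for i, token in enumerate(vocab):
        if token in generated and token not in stop_words:
            masked[i] = float("-inf")
    return masked

vocab = ["the", "snail", "maspin", "regulates", ",", "expression"]
V = len(vocab)
p = final_distribution(torch.softmax(torch.randn(V), -1),
                       torch.softmax(torch.randn(V), -1),
                       torch.softmax(torch.randn(V), -1),
                       g_p=torch.tensor(0.7), g_tilde=torch.tensor(0.4))
print(mask_repetitions(p.log(), generated={"snail"}, vocab=vocab,
                       stop_words={"the", ","}))
```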
Following the story generation work (Fan et al., 2018), we use METEOR (Denkowski and Lavie, 2014) to measure the topic relevance towards given titles and use perplexity to further evaluate the quality of the language model. The perplexity scores of our model are based on the language model6 learned on other PubMed papers (500,000 titles, 50,000 abstracts, 50,000 conclusions and future work) which are not used for training or testing in our experiment.7 The results are shown in Table 3. We can see that our framework outperforms all previous approaches. 3.3 Turing Test Similar to (Wang et al., 2018b), we conduct Turing tests by a biomedical expert (non-native speaker) and a non-expert (native speaker). Each human judge is asked to compare a system output and a human-authored string, and select the better one. 6https://github.com/pytorch/examples/ tree/master/word_language_model 7The perplexity scores of the language model are in the Appendix. 1986 Task Input Output Domain Expert Non-expert End-to-End Human Title Different Abstract (1st) 10 30 Same 30 16 System Abstract Different Conclusion and Future work 12 0 Same 8 8 System Conclusion and Future work Different Title 12 2 Same 12 25 System Title Different Abstract (2nd) 14 4 Diagnostic Human Abstract Different Conclusion and Future work 12 14 Same 24 20 Human Conclusion and Future work Different Title 8 12 Same 2 10 Table 4: Turing Test Human Subject Passing Rates (%). Percentages show how often a human judge chooses our system’s output over human’s when it is mixed with a human-authored string. If the output strings (e.g., abstracts) are based on the same input string (e.g., title), the Input condition is marked “Same”, otherwise “Different”. BLEU1 BLEU2 BLEU3 BLEU4 ROUGE TER 59.6 58.1 56.7 55.4 73.3 35.2 Table 5: Evaluation on Human Post-Editing(%) Table 4 shows the results on 50 pairs in each setting. We can see that PaperRobot generated abstracts are chosen over human-written ones by the expert up to 30% times, conclusion and future work up to 24% times, and new titles up to 12% times. We don’t observe the domain expert performs significantly better than the non-expert, because they tend to focus on different aspects the expert focuses on content (entities, topics, etc.) while the non-expert focuses on the language. 3.4 Human Post-Editing In order to measure the effectiveness of PaperRobot acting as a wring assistant, we randomly select 50 paper abstracts generated by the system during the first iteration and ask the domain expert to edit them until he thinks they are informative and coherent. The BLEU (Papineni et al., 2002), ROUGE (Lin, 2004) and TER (Snover et al., 2006) scores by comparing the abstracts before and after human editing are presented in Table 5. It took about 40 minutes for the expert to finish editing 50 abstracts. Table 1 includes the post-edited example. We can see that most edits are stylist changes. 3.5 Analysis and Discussions To better justify the function of each component, we conduct ablation studies by removing memory networks, link prediction, and repetition removal respectively. The results are shown in Table 6. We can see that the approach without memory networks tends to diverge from the main topic, especially for generating long texts such as abstracts (the detailed length statistics are shown in Table 8). 
From Table 6 we can see the later parts of the abstract (Methods and Results) include topically irrelevant entities such as “imipramine” which is used to treat depression instead of human prostate cancer. Link prediction successfully introduces new and topically related ideas, such as “RT-PCR” and “western blot” which are two methods for analyzing the expression level of Snail protein, as also mentioned in the human written abstract in Table 1. Table 7 shows more examples of entities which are related to the entities in input titles based on link prediction. We can see that the predicted entities are often genes or proteins which cause the disease mentioned in a given title, or other diseases from the same family. Our simple beam search based masking method successfully removes some repeated words and phrases and thus produces more informative output. The plagiarism check in Table 9 shows our model is creative, because it’s not simply copying from the human input. 3.6 Remaining Challenges Our generation model is still largely dependent on language model and extracted facts, and thus it lacks of knowledge reasoning. It generates a few incorrect abbreviations such as “Organophosphates(BA)”, “chronic kidney disease(UC)” and “Fibrosis(DC)”) because they appear rarely in the training data and thus their contextual representations are not reliable. It also generates some incorrect numbers (e.g., “The patients were divided into four groups : Group 1 , Group B...”) and pronouns (e.g., “A 63-year-old man was referred to our hospital ... she was treated with the use of the descending coronary artery” ). 1987 Output Without Memory Networks Without Link Prediction Without Repetition Removal Abstract Background: Snail has been reported to exhibit a variety of biological functions. In this study, we investigated the effect of negatively on maspin demethylation in human prostate cancer cells. Methods: Quantitative real-time PCR and western blot analysis were used to investigate the effects of the demethylating agent on the expression of the protein kinase (TF) gene promoter. Results: The results showed that the presence of a single dose of 50 µM in a dose-dependent manner, whereas the level of the BMP imipramine was significantly higher than that of the control group. Background: Snail has been shown to be associated with poor prognosis. In this study, we investigated the effect of negatively on the expression of maspin in human prostate cancer cells. Methods: Cells were treated with a single dose of radiotherapy for 24 h, and was used to investigate the significance of a quantitative factor for the treatment of the disease. Results: The remaining controls showed a significant increase in the G2/M phase of the tumor suppressor protein (p<0.05). Background: Snail is a major health problem in human malignancies. However, the role of Snail on the expression of maspin in human prostate cancer cells is not well understood. The aim of this study was to investigate the effect of Snail on the expression of maspin in human prostate cancer cells. Methods: The expression of the expression of Snail and maspin was investigated using quantitative RT-PCR and western blot analysis. Results: The remaining overall survival (OS) and overall survival (OS) were analyzed. Conclusion and Future work In summary, our study demonstrated that negatively inhibited the expression of the BMP imipramine in human prostate cancer cells. 
Our findings suggest that the inhibition of maspin may be a promising therapeutic strategy for the treatment. In summary, our results demonstrate that negatively inhibited the expression of maspin in human prostate cancer cells. Our findings suggest that the combination of radiotherapy may be a potential therapeutic target for the treatment of disease. In summary, our results demonstrate that snail inhibited the expression of maspin in human prostatic cells. The expression of snail in PC-3 cells by snail, and the expression of maspin was observed in the presence of the expression of maspin. New Title Protective effects of homolog on human breast cancer cells by inhibiting the Endoplasmic Reticulum Stress The role of prostate cancer in human breast cancer cells The role of maspin and maspin in human breast cancer cells Table 6: Ablation Test Results on the Same Title in Table 1 Titles Predicted Related Entities Pseudoachondroplasia/COMP translating from the bench to the bedside osteoarthritis; skeletal dysplasia; thrombospondin-5 Role of ceramide in diabetes mellitus: evidence and mechanisms diabetes insulin ceramide; metabolic disease Exuberant clinical picture of Buschke-Fischer-Brauer palmoplantar keratoderma in bedridden patient neoplasms; retinoids; autosomal dominant disease Relationship between serum adipokine levels and radiographic progression in patients with ankylosing spondylitis leptin; rheumatic diseases; adiponectin; necrosis; DKK-1; IL-6-RFP Table 7: More Link Prediction Examples (bold words are entities detected from titles) Abstract Conclusion and Future Work Title System 112.4 88.1 16.5 Human 106.5 105.5 13.0 Table 8: The Average Number of Words of System and Human Output Output 1 2 3 4 5 Abstracts 58.3 20.1 8.03 3.60 1.46 Conclusions 43.8 12.5 5.52 2.58 1.28 Titles 20.1 1.31 0.23 0.06 0.00 Table 9: Plagiarism Check: Percentage (%) of n-grams in human input which appear in system generated output for test data. All of the system generated titles are declarative sentences while human generated titles are often more engaging (e.g., “Does HPV play any role in the initiation or prognosis of endometrial adenocarcinomas?”). Human generated titles often include more concrete and detailed ideas such as “etumorType , An Algorithm of Discriminating Cancer Types for Circulating Tumor Cells or Cellfree DNAs in Blood”, and even create new entity abbreviations such as etumorType in this example. 3.7 Requirements to Make PaperRobot Work: Case Study on NLP Domain When a cool Natural Language Processing (NLP) system like PaperRobot is built, it’s natural to ask whether she can benefit the NLP community itself. We re-build the system based on 23,594 NLP papers from the new ACL Anthology Network (Radev et al., 2013). For knowledge extraction we apply our previous system trained for the NLP domain (Luan et al., 2018). But the results are much less satisfactory compared to the 1988 biomedical domain. Due to the small size of data, the language model is not able to effectively copy out-of-vocabulary words and thus the output is often too generic. For example, given a title “Statistics based hybrid approach to Chinese base phrase identification”, PaperRobot generates a fluent but uninformative abstract “This paper describes a novel approach to the task of Chinese-base-phrase identification. We first utilize the solid foundation for the Chinese parser, and we show that our tool can be easily extended to meet the needs of the sentence structure.”. 
Moreover, compared to the biomedical domain, the types of entities and relations in the NLP domain are rather coarse-grained, which often leads to inaccurate prediction of related entities. For example, for an NLP paper title “Extracting molecular binding relationships from biomedical text”, PaperRobot mistakenly extracts “prolog” as a related entity and generates an abstract “In this paper, we present a novel approach to the problem of extracting relationships among the prolog program. We present a system that uses a macromolecular binding relationships to extract the relationships between the abstracts of the entry. The results show that the system is able to extract the most important concepts in the prolog program.”. 4 Related Work Link Prediction. Translation-based approaches (Nickel et al., 2011; Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015; Ji et al., 2015a) have been widely exploited for link prediction. Compared with these studies, we are the first to incorporate multi-head graph attention (Sukhbaatar et al., 2015; Madotto et al., 2018; Veliˇckovi´c et al., 2018) to encourage the model to capture multi-aspect relevance among nodes. Similar to (Wang and Li, 2016; Xu et al., 2017), we enrich entity representation by combining the contextual sentences that include the target entity and its neighbors from the graph structure. This is the first work to incorporate new idea creation via link prediction into automatic paper writing. Knowledge-driven Generation. Deep Neural Networks have been applied to generate natural language to describe structured knowledge bases (Duma and Klein, 2013; Konstas and Lapata, 2013; Flanigan et al., 2016; Hardy and Vlachos, 2018; Pourdamghani et al., 2016; Trisedya et al., 2018; Xu et al., 2018; Madotto et al., 2018; Nie et al., 2018), biographies based on attributes (Lebret et al., 2016; Chisholm et al., 2017; Liu et al., 2018; Sha et al., 2018; Kaffee et al., 2018; Wang et al., 2018a; Wiseman et al., 2018), and image/video captions based on background entities and events (Krishnamoorthy et al., 2013; Wu et al., 2018; Whitehead et al., 2018; Lu et al., 2018). To handle unknown words, we design an architecture similar to pointer-generator networks (See et al., 2017) and copy mechanism (Gu et al., 2016). Some interesting applications include generating abstracts based on titles for the natural language processing domain (Wang et al., 2018b), generating a poster (Qiang et al., 2016) or a science news blog title (Vadapalli et al., 2018) about a published paper. This is the first work on automatic writing of key paper elements for the biomedical domain, especially conclusion and future work, and follow-on paper titles. 5 Conclusions and Future Work We build a PaperRobot who can predict related entities for an input title and write some key elements of a new paper (abstract, conclusion and future work) and predict a new title. Automatic evaluations and human Turing tests both demonstrate her promising performance. PaperRobot is merely an assistant to help scientists speed up scientific discovery and production. Conducting experiments is beyond her scope, and each of her current components still requires human intervention: constructed knowledge graphs cannot cover all technical details, predicted new links need to be verified, and paper drafts need further editing. 
In the future, we plan to develop techniques for extracting entities of more fine-grained entity types, and extend PaperRobot to write related work, predict authors, their affiliations and publication venues. Acknowledgments The knowledge extraction and prediction components were supported by the U.S. NSF No. 1741634 and Tencent AI Lab Rhino-Bird Gift Fund. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. 1989 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 5th International Conference on Learning Representations. Rossana Berardi, Francesca Morgese, Azzurra Onofri, Paola Mazzanti, Mirco Pistelli, Zelmira Ballatore, Agnese Savini, Mariagrazia De Lisa, Miriam Caramanti, Silvia Rinaldi, et al. 2013. Role of maspin in cancer. Clinical and translational medicine. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in neural information processing systems. Andrew Chisholm, Will Radford, and Ben Hachey. 2017. Learning to generate one-sentence biographies from Wikidata. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Allan Peter Davis, Cynthia J Grondin, Robin J Johnson, Daniela Sciaky, Benjamin L King, Roy McMorran, Jolene Wiegers, Thomas C Wiegers, and Carolyn J Mattingly. 2016. The comparative toxicogenomics database: update 2017. Nucleic acids research. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the 9th Workshop on Statistical Machine Translation. Daniel Duma and Ewan Klein. 2013. Generating natural language from linked data: Unsupervised template extraction. In Proceedings of the 10th International Conference on Computational Semantics. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime Carbonell. 2016. Generation from abstract meaning representation using tree transducers. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Jacob G. Foster, Andrey Rzhetsky, and James A. Evans. 2015. Tradition and innovation in scientists research strategies. American Sociological Review. Mary Ellen Foster and Michael White. 2007. Avoiding repetition in generated text. In Proceedings of the 11th European Workshop on Natural Language Generation. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. 
In Proceedings of the 2015 IEEE International Joint Conference on Neural Networks. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Hardy Hardy and Andreas Vlachos. 2018. Guided neural language generation for abstractive summarization using Abstract Meaning Representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015a. Knowledge graph embedding via dynamic mapping matrix. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Ming Ji, Qi He, Jiawei Han, and Scott Spangler. 2015b. Mining strong relevance between heterogeneous entities from unstructured biomedical data. Data Mining and Knowledge Discovery, 29:976998. Lucie-Aim´ee Kaffee, Hady Elsahar, Pavlos Vougiouklis, Christophe Gravier, Frederique Laforest, Jonathon Hare, and Elena Simperl. 2018. Learning to generate Wikipedia summaries for underserved languages from Wikidata. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Ioannis Konstas and Mirella Lapata. 2013. A global model for concept-to-text generation. Journal of Artificial Intelligence Research. Niveda Krishnamoorthy, Girish Malkarnenkar, Raymond J Mooney, Kate Saenko, and Sergio Guadarrama. 2013. Generating natural-language video descriptions using text-mined knowledge. In Proceedings of the 27th AAAI Conference on Artificial Intelligence. R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. 1990 Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Proceedings of Text Summarization Branches Out. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the 39th AAAI Conference on Artificial Intelligence. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text generation by structure-aware seq2seq learning. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Di Lu, Spencer Whitehead, Lifu Huang, Heng Ji, and Shih-Fu Chang. 2018. Entity-aware image caption generation. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Corey L. Neal, Veronica Henderson, Bethany N. Smith, Danielle McKeithen, Tisheeka Graham, Baohan T. Vo, and Valerie A. Odero-Marah. 2012. Snail transcription factor negatively regulates maspin tumor suppressor in human prostate cancer cells. BMC Cancer. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on Machine Learning. Feng Nie, Jinpeng Wang, Jin-Ge Yao, Rong Pan, and Chin-Yew Lin. 2018. Operation-guided neural networks for high fidelity data-to-text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Steven Pinker. 2014. Why academics stink at writing. The Chronicle of Higher Education. Nima Pourdamghani, Kevin Knight, and Ulf Hermjakob. 2016. Generating English from Abstract Meaning Representations. In Proceedings of the 9th International Natural Language Generation conference. Yuting Qiang, Yanwei Fu, Yanwen Guo, Zhi-Hua Zhou, and Leonid Sigal. 2016. Learning to generate posters of scientific papers. In Proceedings of the 30th AAAI Conference on Artificial Intelligence. Dragomir R. Radev, Pradeep Muthukrishnan, Vahed Qazvinian, and Amjad Abu-Jbara. 2013. The acl anthology network corpus. Language Resources and Evaluation, pages 1–26. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, and Zhifang Sui. 2018. Orderplanning neural text generation from structured data. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems. Jun Suzuki and Masaaki Nagata. 2017. Cutting-off redundant repeating generations for neural abstractive summarization. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Bayu Distiawan Trisedya, Jianzhong Qi, Rui Zhang, and Wei Wang. 2018. GTR-LSTM: A triple encoder for sentence generation from RDF data. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. 
Modeling coverage for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. 1991 Raghuram Vadapalli, Bakhtiyar Syed, Nishant Prabhu, Balaji Vasan Srinivasan, and Vasudeva Varma. 2018. When science journalism meets artificial intelligence: An interactive demonstration. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Richard Van Noorden. 2014. Scientists may be reaching a peak in reading habits. Nature. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2018. Graph attention networks. Proceedings of the 8th International Conference on Learning Representations. Qingyun Wang, Xiaoman Pan, Lifu Huang, Boliang Zhang, Zhiying Jiang, Heng Ji, and Kevin Knight. 2018a. Describing a knowledge base. In Proceedings of the 11th International Conference on Natural Language Generation. Qingyun Wang, Zhihao Zhou, Lifu Huang, Spencer Whitehead, Boliang Zhang, Heng Ji, and Kevin Knight. 2018b. Paper abstract writing through editing mechanism. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the 28th AAAI Conference on Artificial Intelligence. Zhigang Wang and Juan-Zi Li. 2016. Text-enhanced representation learning for knowledge graph. In Proceedings of the 25th International Joint Conference on Artificial Intelligence. Chih-Hsuan Wei, Hung-Yu Kao, and Zhiyong Lu. 2013. PubTator: a web-based text mining tool for assisting biocuration. Nucleic acids research. Spencer Whitehead, Heng Ji, Mohit Bansal, Shih-Fu Chang, and Clare Voss. 2018. Incorporating background knowledge into video description generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2018. Learning neural templates for text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Qi Wu, Chunhua Shen, Peng Wang, Anthony Dick, and Anton van den Hengel. 2018. Image captioning and visual question answering based on attributes and external knowledge. In Proceedings of the 2018 IEEE transactions on pattern analysis and machine intelligence. Ziang Xie. 2017. Neural text generation: A practical guide. arXiv preprint arXiv:1711.09534. Jiacheng Xu, Kan Chen, Xipeng Qiu, and Xuanjing Huang. 2017. Knowledge graph representation with jointly structural and textual encoding. In Proceedings of the 26th International Joint Conference on Artificial Intelligence. Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, and Vadim Sheinin. 2018. SQL-to-text generation with graph-to-sequence model. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
2019
191
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1992–2001 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 1992 Rhetorically Controlled Encoder-Decoder for Modern Chinese Poetry Generation Zhiqiang Liu†, Zuohui Fu‡∗, Jie Cao♦∗, Gerard de Melo‡, Yik-Cheung Tam†, Cheng Niu† and Jie Zhou† †Pattern Recognition Center, WeChat AI, Tencent Inc, China ‡Department of Computer Science, Rutgers University ♦School of Computing, University of Utah [email protected],[email protected],[email protected] [email protected],{wilsontam,niucheng,withtomzhou}@tencent.com Abstract Rhetoric is a vital element in modern poetry, and plays an essential role in improving its aesthetics. However, to date, it has not been considered in research on automatic poetry generation. In this paper, we propose a rhetorically controlled encoder-decoder for modern Chinese poetry generation. Our model relies on a continuous latent variable as a rhetoric controller to capture various rhetorical patterns in an encoder, and then incorporates rhetoricbased mixtures while generating modern Chinese poetry. For metaphor and personification, an automated evaluation shows that our model outperforms state-of-the-art baselines by a substantial margin, while a human evaluation shows that our model generates better poems than baseline methods in terms of fluency, coherence, meaningfulness, and rhetorical aesthetics. 1 Introduction Modern Chinese poetry, originating from 1900 CE, is one of the most important literary formats in Chinese culture and indeed has had a profound influence on the development of modern Chinese culture. Rhetoric is a vital element in modern poetry, and plays an important role in enhancing its aesthetics. Incorporating intentional rhetorical embellishments is essential to achieving the desired stylistic aspects of impassioned modern Chinese poetry. In particular, the use of metaphor and personification, both frequently used forms of rhetoric, are able to enrich the emotional impact of a poem. Specifically, a metaphor is a figure of speech that describes one concept in terms of another one. Within this paper, the term “metaphor” is considered in the sense of a general figure of ∗The work was done when Zuohui Fu and Jie Cao were interns at Pattern Recognition Center, WeChat AI, Tencent Inc. 独自 白云漫了太阳 青山环拥着正睡的时候 牛乳般雾露遮遮掩掩 像轻纱似的 幂了新嫁娘的面 (White clouds obscured the sun) (When the surrounding green hills are sleeping) (Milky fog and dew are partly hidden and partly visible) (Like a light yarn) (Cover the bride's face) (Alone) Personification Metaphor Figure 1: A modern Chinese poetry with metaphor and personification. speech 比喻(bi yu), encompassing both metaphor in its narrower sense and similes. Personification is a figure of speech in which a thing, an idea or an animal is given human attributes, i.e., nonhuman objects are portrayed in such a way that we feel they have the ability to act like human beings. For example, 她笑起来像花儿一样(’She smiles like lovely flowers’) with its connection between smiling and flowers highlights extraordinary beauty and pureness in describing the verb ’smile’. 夜空中的星星眨着眼睛(’Stars in the night sky squinting’) serves as an example of personification, as stars are personified and described as squinting, which is normally considered an act of humans, but here is invoked to more vividly describe twinkling stars. 
As is well known, rhetoric encompasses a variety of forms, including metaphor, personification, exaggeration, and parallelism. For our work, we collected more than 8,000 Chinese poems and over 50,000 Chinese song lyrics. Based on the statistics given in Table 1, we observe that metaphor and personification are the most frequently used rhetorical styles in modern Chinese poetry and lyrics (see Section 4.1 for details about this data). 1993 Dataset Docs Lines Metaphor Personification Poetry 8,744 137,105 31.4% 18.5% Lyrics 53,150 1,036,425 23.8% 13.2% Table 1: Quantitative evaluation of the phenomena of metaphor and personification in modern Chinese poems and lyrics. Hence, we will mainly focus on the generation of metaphor and personification in this work. As an example, an excerpt from the modern Chinese poem 独自(Alone) is given in Figure 1, where the fourth sentence (highlighted in blue) invokes a metaphorical simile, while the second one (highlighted in red) contains a personification. In recent years, neural generation models have become widespread in natural language processing (NLP), e.g., for response generation in dialogue (Le et al., 2018), answer or question generation in question answering, and headline generation in news systems. At the same time, poetry generation is of growing interest and has attained high levels of quality for classical Chinese poetry. Previously, Chinese poem composing research mainly focused on traditional Chinese poems. In light of the mostly short sentences and the metrical constraints of traditional Chinese poems, the majority of research attention focused on term selection to improve the thematic consistency (Wang et al., 2016). In contrast, modern Chinese poetry is more flexible and rich in rhetoric. Unlike sentimentcontrolled or topic-based text generation methods (Ghazvininejad et al., 2016), which have been widely used in poetry generation, existing research has largely disregarded the importance of rhetoric in poetry generation. Yet, to emulate humanwritten modern Chinese poems, it appears necessary to consider not only the topics but also the form of expression, especially with regard to rhetoric. In this paper, we propose a novel rhetorically controlled encoder-decoder framework inspired by the above sentiment-controlled and topic-based text generation methods, which can effectively generate poetry with metaphor and personification. Overall, the contributions of the paper are as follows: • We present the first work to generate modern Chinese poetry while controlling for the use of metaphor and personification, which play an essential role in enhancing the aesthetics of poetry. • We propose a novel metaphor and personification generation model with a rhetorically controlled encoder-decoder. • We conduct extensive experiments showing that our model outperforms the state-of-theart both in automated and human evaluations. 2 Related Work 2.1 Poetry Generation Poetry generation is a challenging task in NLP. Traditional methods (Gerv´as, 2001; Manurung, 2004; Greene et al., 2010; He et al., 2012) relied on grammar templates and custom semantic diagrams. In recent years, deep learning-driven methods have shown significant success in poetry generation, and topic-based poetry generation systems have been introduced (Ghazvininejad et al., 2017, 2018; Yi et al., 2018b). In particular, Zhang and Lapata (2014) propose to generate Chinese quatrains with Recurrent Neural Networks (RNNs), while Wang et al. 
(2016) obtain improved results by relying on a planning model for Chinese poetry generation. Recently, Memory Networks (Sukhbaatar et al., 2015) and Neural Turing Machines (Graves et al., 2014) have proven successful at certain tasks. The most relevant work for poetry generation is that of Zhang et al. (2017), which stores hundreds of human-authored poems in a static external memory to improve the generated quatrains and achieve a style transfer. The above models rely on an external memory to hold training data (i.e., external poems and articles). In contrast, Yi et al. (2018a) dynamically invoke a memory component by saving the writing history into memory. 2.2 Stylistic Language Generation The ability to produce diverse sentences in different styles under the same topics is an important characteristic of human writing. Some works have explored style control mechanisms for text generation tasks. For example, Zhou and Wang (2018) use naturally labeled emojis for large-scale emotional response generation in dialogue. Ke et al. (2018) and Wang et al. (2018) propose a sentence controlling function to generate interrogative, imperative, or declarative responses in dialogue. For the task of poetry generation, Yang et al. (2018) introduce an unsupervised style labeling to generate stylistic poetry, based on mutual information. Inspired by the above works, we regard rhetoric in 1994 poetry as a specific style and adopt a Conditional Variational Autoencoder (CVAE) model to generate rhetoric-aware poems. CVAEs (Sohn et al., 2015; Larsen et al., 2016) extend the traditional VAE model (Kingma and Welling, 2014) with an additional conditioned label to guide the generation process. Whereas VAEs essentially directly store latent attributes as probability distributions, CVAEs model latent variables conditioned on random variables. Recent research in dialogue generation shows that language generated by VAE models benefit from a significantly greater diversity in comparison with traditional Seq2Seq models. Recently, CVAEs and adversarial training have been explored for the task of generating classical Chinese poems (Li et al., 2018). 3 Methodology In this paper, our goal is to leverage metaphor and personification (known as rhetoric modes) in modern Chinese poetry generation using a dedicated rhetoric control mechanism. 3.1 Overview Before presenting our model, we first formalize our generation task. The inputs are poetry topics specified by K user-provided keywords {wk}K k=1. The desired output is a poem consisting of n lines {Li}n i=1. Since we adopt a sequence-to-sequence framework and generate a poem line by line, the task can be cast as a text generation one, requiring the repeated generation of an i-th line that is coherent in meaning and related to the topics, given the previous i −1 lines L1:i−1 and the topic keywords w1:K. In order to control the rhetoric modes, the rhetoric label r may be provided either as an input from the user, or from an automatic prediction based on the context. Hence, the task of poetry line generation can be formalized as follows: L∗ i = arg max L P(L | L1:i−1, w1:K, ri) (1) As mentioned above, incorporating rhetoric into poetic sentences requires controlling for the rhetoric mode and memorizing contextual topic information. To this end, we first propose two conditional variational autoencoder models to effectively control when to generate rhetoric sentences, and which rhetoric mode to use. The first model is a Manual Control CVAE model (MCCVAE). 
It receives the user’s input signal as a rhetoric label r to generate the current sentence in the poem, and is designed for user-controllable poetry generation tasks. The second model is the Automatic Control CVAE (ACCVAE), which automatically predicts when to apply appropriate forms of rhetoric and generates the current sentence based on contextual information. Subsequently, to memorize pertinent topic information and generate more coherent rhetorical sentences, we propose a topic memory component to store contextual topic information. At the same time, we propose a rhetorically controlled decoder to generate appropriate rhetorical sentences. This is a mechanism to learn the latent rhetorical distribution given a context and a word, and then perform a rhetorically controlled term selection during the decoding stage. Our proposed framework will later be presented in more detail in Figure 2. 3.2 Seq2seq Baseline Our model is based on the sequence-to-sequence (Seq2Seq) framework, which has been widely used in text generation. The encoder transforms the current input text X = {x1, x2, ..., xJ} into a hidden representation H = {h1, h2, ..., hJ}, as follows: hj = LSTM(e(xj), hj−1), (2) where LSTM is a Long Short-Term Memory Network, and e(xj) denotes the embedding of the word xj. The decoder first updates the hidden state S = {s1, s2, .., sT }, and then generates the next sequence Y = {y1, y2, ..., yT } as follows: st = LSTM(e(yt−1), st−1)) P(yt | yt−1, st) = softmax(Wst), (3) where this second LSTM does not share parameters with the encoder’s network. 3.3 Proposed Models In the following, we will describe our models for rhetorically controlled generation. 3.3.1 Manual Control (MC) CVAE We introduce a Conditional Variational Autoencoder (CVAE) for the task of poetry generation. Mathematically, the CVAE is trained by maximizing a variational lower bound on the conditional likelihood of Y given c, in accordance with p(Y | c) = Z p(Y | z, c) p(z | c) dz, (4) 1995 c Prior  network  Recognition  network  z z' r Encoder Encoder Predictor s1 s2 s3 Mixture [s;z;c] Content words Rhetoric words + o Rhetorically controlled decoder s c [z;c] r Current lines Next line Rhetoric label (r) Automatic Control Manual Control Topic memory network 0.6 0.4 Decoder Topic words Generated line  hX CVAE Figure 2: Illustration of our model. where z, c, and Y are random variables, and the latent variable z is used to encode the semantics and rhetoric of the generated sentence. In our manual control model, the conditional variables that capture the input information are c = [hX; e(r)], where e(r) is the embedding of the rhetorical variable r. hX is the encoding of current poem sentences X, and the target Y represents the next sentence to be generated. Then on top of the traditional Seq2seq model, we introduce a prior network, a recognition network, and the decoder: (i) The prior network pP(z|c) is an approximation of p(z|c). (ii) The decoder pD(Y |z, c) is used to approximate p(Y |z, c). (iii) The recognition network qR(z|Y, c) serves to approximate the true posterior p(z|Y, c). Then the variational lower bound to the loss −log p(Y |c) can be expressed as: −L(θD; θP; θR; Y, c) = LKL + LdecoderCE = KL(qR(z | Y, c) || pP(z | c)) −EqR(z|Y,c) (log pD(Y | z, c)) (5) Here, θD, θP, θR are the parameters of the decoder, prior network, and recognition network, respectively. 
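In code, the objective in Eq. (5) is a Gaussian KL term plus a token-level cross-entropy over the next line Y. The following is a minimal PyTorch sketch under our own naming, assuming (as parameterized in Eq. (6) below) that both the recognition and the prior network output the mean and log-variance of diagonal Gaussians; it is an illustration, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians,
    i.e. KL(recognition || prior) in Eq. (5)."""
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0,
        dim=-1,
    )

def reparameterize(mu, logvar):
    """z = mu + sigma * eps, so the sampling step stays differentiable."""
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

def cvae_loss(mu_q, logvar_q, mu_p, logvar_p, decoder_logits, target_ids, pad_id=0):
    """-L(theta_D, theta_P, theta_R; Y, c): KL term plus decoder cross-entropy.

    decoder_logits: (batch, seq_len, vocab), produced by p_D(Y | z, c) with z
    sampled from the recognition network during training.
    """
    kl = gaussian_kl(mu_q, logvar_q, mu_p, logvar_p).mean()
    recon = F.cross_entropy(
        decoder_logits.reshape(-1, decoder_logits.size(-1)),
        target_ids.reshape(-1),
        ignore_index=pad_id,
    )
    return kl + recon
```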
Intuitively, the second term maximizes the sentence generation probability after sampling from the recognition network, while the first term minimizes the distance between prior and recognition network. Usually, we assume that both the prior and the recognition networks are multivariate Gaussian distributions, and their mean and log variance are estimated through multilayer perceptrons (MLP) as follows:  µ, σ2 = MLPposterior(LSTM(Y ), c) h µ ′, σ ′2i = MLPprior(c) (6) A single layer of the LSTM is used to encode the current lines, and obtain the hX component of c. The same LSTM structure is also used to encode the next line Y in the training stage. By using Eq. (6), we calculate the KL divergence between these distributions to optimize Eq. (5). Following the practice in Zhao et al. (2017), a reparameterization technique is used when sampling from the recognition and the prior network during training and testing. 3.3.2 Automatic Control(AC) CVAE In the ACCVAE model, we first predict the rhetorical mode of the next sentence using an MLP that is designed as follows: p(r|hX) = softmax(MLPpredictor(hX)) r = arg max p(r | hX) (7) In this case, the conditional variable c is also [hX; e(r)], where hX is taken as the last hidden state of the encoder LSTM. The loss function is then defined as: L = LKL + LdecoderCE + LpredictorCE (8) 1996 In this paper, a two-layer MLP is used for Eq. (7). 3.4 Topic Memory Component As shown above, LSTMs are used to encode the lines of the poem. Considering the fact that Memory Networks (Sukhbaatar et al., 2015) have demonstrated great power in capturing long temporal dependencies, we incorporate a memory component for the decoding stage. By equipping it with a larger memory capacity, the memory is able to retain temporally distant information in the writing history, and provide a RAM-like mechanism to support model execution. In our poetry generation model, we rely on a special topic memory component to memorize both the topic and the generation history, which are of great help in generating appropriate rhetorical and semantically consistent sentences. As illustrated in Figure 2, our topic memory is M ∈RK′×dh, where each row of the matrices is a memory slot with slot size dh and the number of slots is K′. Before generating the i-th line Li, topic words wk from the user and the input text are written into the topic memory in advance, which remains unchanged during the generation of a sentence. Memory Reading. We introduce an Addressing Function as α = A(M, q), which calculates the probabilities of each slot of the memory being selected and invoked. Specifically, we define: zk = bT σ(Mk, q) αk = softmax(zk), (9) where σ defines a non-linear layer, q is the query vector, b is the parameter, M is the memory to be addressed, Mk is the k-th slot of M, and αk is the k-th element in vector α. For the topic memory component, the input q should be [st−1; c; z], so the topic memory is read as follow: α′ = Ar(M, [st−1; c; z]) ot = K′ X k=1 α′ kMk, (10) where α′ is the reading probability vector, st−1 represents the decoder hidden state, and ot is the memory output at the t-th step. 3.5 Rhetorically Controlled Decoder A general Seq2seq model may tend to emit generic and meaningless sentences. In order to create poems with more meaningful and diverse rhetoric, we propose a rhetorically controlled decoder. It assumes that each word in a poem sentence has a latent type designating it as a content word or as a rhetorical word. 
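Before the distributions are written out formally in Eq. (11)–(13) below, the two-way mixture can be sketched in a few lines of PyTorch. The module below is our own illustration (layer names and tensor layout are assumptions), not the paper's released code; it only shows how a type gate over {content, rhetoric} mixes two vocabulary distributions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RhetoricallyControlledOutput(nn.Module):
    """Mixes a content-word distribution and a rhetoric-word distribution,
    weighted by a latent word-type distribution (cf. Eq. (11)-(13) below)."""

    def __init__(self, hidden_dim, latent_dim, cond_dim, memory_dim, vocab_size):
        super().__init__()
        d = hidden_dim + latent_dim + cond_dim
        self.type_layer = nn.Linear(d, 2)                            # P(tau_t | s_t, z, c)
        self.rhetoric_layer = nn.Linear(d, vocab_size)               # P(y_t | tau=rhetoric, s_t, z, c)
        self.content_layer = nn.Linear(d + memory_dim, vocab_size)   # P(y_t | tau=content, s_t, o_t, z, c)

    def forward(self, s_t, o_t, z, c):
        szc = torch.cat([s_t, z, c], dim=-1)
        type_prob = F.softmax(self.type_layer(szc), dim=-1)          # (batch, 2)
        p_content = F.softmax(self.content_layer(torch.cat([s_t, o_t, z, c], dim=-1)), dim=-1)
        p_rhetoric = F.softmax(self.rhetoric_layer(szc), dim=-1)
        # Final distribution: a mixture of the two type-specific distributions
        # over the same vocabulary.
        return type_prob[:, :1] * p_content + type_prob[:, 1:] * p_rhetoric
```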
The decoder then calculates a word type distribution over the latent types given the context, and computes type-specific generation distributions over the entire vocabulary. The final probability of generating a word is a mixture of type-specific generation distributions, where the coefficients are type probabilities. The final generation distribution P(yt | st, ot, z, c) from the sampled word is defined as P(yt | st, ot, z, c) = P(yt | τt = content, st, ot, z, c) P(τt = content | st, z, c) +P(yt | τt = rhetoric, st, z, c) P(τt = rhetoric | st, z, c), (11) where τt denotes the word type at time step t. This specifies that the final generation probability is a mixture of the type-specific generation probability P(yt | τt, st, z, c), weighted by the probability of the type distribution P(τt | st, z, c). We refer to this decoder as a rhetorically controlled decoder. The probability distribution over word types is given by P(τt | st, z, c) = softmax(W0[st; z; c] + b0), where st is the hidden state of the decoder at time step t, W ∈Rk×d with the dimension d. The word type distribution predictor can be trained in decoder training stage together. The type-specific generation distribution is given by P(yt | τt = content, st, ot, z, c) = softmax(Wcontent[st; ot; z; c] + bcontent) (12) P(yt | τt = rhetoric, st, z, c) = softmax(Wrhetoric[st; z; c] + brhetoric), (13) where Wcontent, Wrhetoric ∈R|V |×d, and |V | is the size of the entire vocabulary. Note that the type-specific generation distribution is parameterized by these matrices, indicating that the distribution for each word type has its own parameters. Instead of using a single distribution, our rhetorically controlled decoder enriches the model by applying multiple type-specific generation distributions, which enables the model to convey more information about the potential word to be generated. Also note that the generation distribution is over the same vocabulary. 1997 Model Precision Recall F1 Metaphor 0.93 0.92 0.92 Personification 0.69 0.62 0.65 Other 0.76 0.82 0.79 Table 2: Results of the rhetoric classifier on the test sets. 3.6 Overall Loss Function The CVAE and Seq2seq model with the rhetorically controlled decoder should be trained jointly. Therefore, the overall loss L is a linear combination of the KL term LKL, the classification loss of the rhetoric predictor cross entropy (CE) LpredictorCE, the generation loss of the rhetorical controlled decoder cross entropy LdecoderCE, and the word type classifier (word type distribution predictor) cross entropy Lword classifier: L = LKL + LdecoderCE+ Lword classifier + γLpredictorCE (14) The technique of KL cost annealing can address the optimization challenges of vanishing latent variables in this encoder-decoder architecture. γ is set to 0 if the Manual Control CVAE is used, and 1 otherwise. 4 Experiments 4.1 Datasets and Setups We conduct all experiments on two datasets1. One is a modern Chinese poetry dataset, while the other is a modern Chinese lyrics dataset. We collected the modern Chinese poetry dataset from an online poetry website2 and crawled about 100,000 Chinese song lyrics from a small set of online music websites. The sentence rhetoric label is required for our model training. To this end, we built a classifier to predict the rhetoric label automatically. We sampled about 15,000 sentences from the original poetry dataset and annotated the data manually with three categories, i.e., metaphor, personification, and other. 
This dataset was divided into a training set, validation set, and test set. Three classifiers, including LSTM, Bi-LSTM, and Bi-LSTM with a self-attention model, were trained on this dataset. The Bi-LSTM with self-attention classifier (Yang et al., 2016) outperforms the other models and achieves the best accuracy of 0.83 on the 1https://github.com/Lucien-qiang/Rhetoric-Generator 2http://www.shigeku.com/ test set. In this classifier, the sizes of word embedding, hidden state and the attention size are set to 128, 256, 30 respectively, and a two-layer LSTM is used. The results for different classes are given in Table 2. Additionally, we select a large number of poem sentences with metaphor and personification to collect the corresponding rhetorical words. Based on statistics of word counts and part of speech, we obtained over 500 popular words associated with metaphor and personification as rhetorical words. Our statistical results show that these words cover a wide range of metaphorical and anthropomorphic features. Meanwhile, in our entire model, the sizes of word embedding, rhetoric label embedding, hidden state are set to 128, 128, 128 respectively. The dimensionality of the latent variable is 256 and a single-layer decoder is used. The word embedding is initialized with word2vec vectors pre-trained on the whole corpus. 4.2 Models for Comparisons We also compare our model against previous stateof-the-art poetry generation models: • Seq2Seq: A sequence-to-sequence generation model, as has been successfully applied to text generation and neural machine translation (Vinyals and Le, 2015). • HRED: A hierarchical encoder-decoder model for text generation (Serban et al., 2016), which employs a hierarchical RNN to model the sentences at both the sentence level and the context level. • WM: A recent Working Memory model for poetry generation (Yi et al., 2018b). • CVAE: A standard CVAE model without the specific decoder. We adopt the same architecture as that introduced in Zhao et al. (2017). 4.3 Evaluation Design In order to obtain objective and realistic evaluation results, we rely on a combination of both machine evaluation and human evaluation. Automated Evaluation. To measure the effectiveness of the models automatically, we adopt several metrics widely used in existing studies. BLEU scores3 and Perplexity are used to quantify 3The BLEU score is calculated with the standard multibleu.perl script. 1998 Dataset Model BLEU(%) PPL Precision Recall Rhetoric-F1 Distinct-1 Distinct-2 Poetry Seq2seq 0.38 124.55 0.49 0.45 0.47 0.0315 0.0866 HRED 0.41 119.74 0.51 0.50 0.50 0.0347 0.0924 CVAE 0.44 108.72 0.62 0.61 0.61 0.0579 0.1775 WM 0.42 115.39 0.57 0.60 0.58 0.0498 0.1243 AC model (ours) 0.43 112.28 0.64 0.65 0.64 0.0607 0.1854 MC model (ours) 0.47 95.65 0.68 0.67 0.67 0.0595 0.1747 Lyrics Seq2seq 0.52 257.06 0.37 0.34 0.35 0.0149 0.0574 HRED 0.54 201.85 0.37 0.35 0.36 0.0193 0.0602 CVAE 0.59 147.45 0.40 0.41 0.41 0.0231 0.0655 WM 0.55 183.67 0.37 0.40 0.38 0.0216 0.0628 AC model (ours) 0.58 159.78 0.41 0.41 0.41 0.0325 0.0817 MC model (ours) 0.57 170.46 0.45 0.49 0.47 0.0273 0.0739 Table 3: Results of machine evaluation. PPL represents perplexity. Poetry Lyrics F C M RA F C M RA Seq2Seq 2.7 2.4 2.8 2.3 3.0 2.4 2.9 2.4 HRED 2.8 2.9 2.7 2.5 2.9 2.7 3.0 2.3 CVAE 3.2 2.7 3.0 3.1 3.3 2.6 2.9 2.9 WM 3.1 3.4 3.1 3.0 3.1 3.1 2.8 2.7 AC model (ours) 3.0 3.4 3.2 3.5 3.3 3.0 3.1 3.2 Table 4: The results of human evaluation. F means Fluency. C stands for Coherence. 
M represents Meaningfulness while RA represents Rhetorical Aesthetics. how well the models fit the data. The RhetoricF1 score is used to measure the rhetorically controlled accuracy of the generated poem sentences. Specifically, if the rhetoric label of the generated sentence is consistent with the ground truth, the generated result is right, and wrong otherwise. The rhetoric label of each poem sentence is predicted by our rhetoric classifier mentioned above (see 4.1 for details about this classifier). Distinct1/Distinct-2 (Li et al., 2016) is used to evaluate the diversity of the generated poems. Human Evaluation. Following previous work (Yi et al., 2018b), we consider four criteria for human evaluation: • Fluency: Whether the generated poem is grammatically correct and fluent. • Coherence: Whether the generated poem is coherent with the topics and contexts. • Meaningfulness: Whether the generated poem contains meaningful information. • Rhetorical Aesthetics: Whether the generated rhetorical poem has some poetic and artistic beauty. Each criterion is scored on a 5-point scale ranging from 1 to 5. To build a test set for human evaluation, we randomly select 200 sets of topic words to generate poems with the models. We invite 10 不管有多少风雨 我愿意为你 守护在青春岁月里 愿意为你 不要问我为何 (No matter how much wind and rain) (I'd like to do it for you) (Guard in youth) (Willing to anything for you) (Don't ask me why) 那些岁月里的美好时光 我们都在寻觅 你的心已变得陌生 爱变得不能相聚 我会在等你 (Good times in those years) (We're all looking for it) (Your heart has become unfamiliar) (Love becomes impossible to embrace) (I will be waiting for you) (a) Seq2seq model (b) WM model Figure 3: The results of the Seq2Seq and WM model. 青春有你有我的世界里 它像个孩子一样微笑甜蜜 我的故事写在那个岁月里 静静地睡去 但永远被铭记 (Youth is in your and my world) (It smiles like a child) (My story is written in those years) (Sleep quietly) (But be remembered forever) Figure 4: The result of the our model. experts4 to provide scores according to the above criteria and the average score for each criterion is computed. 4.4 Evaluation Results The results of the automated evaluation are given in Table 3. Our MC model obtains a higher BLEU score and lower perplexity than other baselines on the poetry dataset, which suggests that the model is on a par with other models in generating grammatical sentences. Note that our AC model obtains higher Distinct-1 and Distinct-2 scores because it tends to generate more diverse and informative results. In terms of the rhetoric generation accuracy, our model outperforms all the baselines and achieves 4The experts are Chinese literature students or members of a poetry association. 1999 Rhetoric Type Examples Metaphor Input: 光明和暗影交替在你脸面,忽闪出淡红的悠远和蓝色的幽深 (Light and shadows interlace in your face, flashing pale reddish distances and blue depths) Topic Words: 恋爱;光明;脸面(Love; Light; Face) Output:你的眼 眼 眼神 神 神像 像 像我心灵的花朵一样绽放 (Your eyes blossom like flowers in my heart) Personification Input: 下一次。下一次?改变它,像镜子的客观 (Another time. Another time? Change it, like the objectivity of a mirror) Topic Words: 灵魂;镜子;客观(Soul; Mirror; Objectivity) Output:它们慢慢地走 走 走来 来 来 (They walked slowly) Other Input: 我的话还一句没有出口,蜜蜂的好梦却每天不同 (My words have not spoken, but the bees’ dreams are different every day) Topic Words: 春天;蜜蜂;梦(Spring; Bees; Dreams) Output:我埋怨你的何时才会说完 (I blame you, when will I finish) Table 5: The result of the rhetoric control. the best Rhetoric-F1 score of 0.67 on the poetry dataset, which suggests that our model can control the rhetoric generation substantially more effectively. 
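For readers who wish to reproduce the automatic scores discussed above, both metrics reduce to short helpers. The sketch below reflects one plausible reading of the definitions in the text (Rhetoric-F1 as macro F1 between intended and classifier-predicted labels, Distinct-n as distinct n-grams scaled by the number of generated tokens, following Li et al. (2016)); the `classifier` callable stands in for the pre-trained Bi-LSTM rhetoric classifier and is an assumed interface, not released code.

```python
from sklearn.metrics import f1_score

def distinct_n(sentences, n):
    """Distinct-n: number of distinct n-grams in the generated poems,
    scaled by the total number of generated tokens (Li et al., 2016)."""
    ngrams, num_tokens = set(), 0
    for tokens in sentences:                      # each sentence is a list of tokens
        num_tokens += len(tokens)
        ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(ngrams) / max(num_tokens, 1)

def rhetoric_f1(generated_sentences, intended_labels, classifier):
    """Macro F1 between the rhetoric labels the model was asked to realize and
    the labels the pre-trained classifier assigns to its outputs."""
    predicted = [classifier(s) for s in generated_sentences]   # 'metaphor' / 'personification' / 'other'
    return f1_score(intended_labels, predicted, average="macro")
```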
The other baselines have low scores because they do not possess any direct way to control for rhetoric. Instead, they attempt to learn it automatically from the data, but do not succeed at this particularly well. Table 4 provides the results of the human evaluation. We observe that on both datasets, our method achieves the best results in terms of the Meaningfulness and Rhetorical Aesthetics metrics. Additionally, we find that the WM model has higher scores in the Coherence metric over the two datasets, indicating that the memory component has an important effect on the coherence and relevance of the topics. The CVAE model obtains the best results in terms of the Fluency metric, which shows that this model can generate more fluent sentences, but it lacks coherence and meaningfulness. Overall, our model generates poems better than other baselines in terms of fluency, coherence, meaningfulness, and rhetorical aesthetics. In particular, these results show that a rhetorically controlled encoder-decoder can generate reasonable metaphor and personification in poems. 4.5 Case Study Table 5 presents example poems generated by our model. These also clearly show that our model can control the rhetoric-specific generation. In Case 1, our model is able to follow the topics 恋爱;脸面 (love, face) and the metaphor label when generating the sentence, e.g., 你的眼神像心灵的花朵 一样绽放(Your eyes blossom like flowers in my heart). In Case 2, our model obtaining the personification signal is able to generate a personification word 走来(walk). As an additional case study, we also randomly select a set of topic words {青春Youth, 爱情 Love, 岁月Years} and present three five-line poems generated by Seq2Seq, WM, and our model, respectively, with the same topics and automatically controlled rhetoric. All the poems generated by the different models according to the same topic words are presented in Figures 3 and 4. The poem generated by our model is more diverse and aesthetically pleasing with its use of metaphor and personification, while the two other poems focus more on the topical relevance. 5 Conclusion and Future work In this paper, we propose a rhetorically controlled encoder-decoder for modern Chinese poetry generation. Our model utilizes a continuous latent variable to capture various rhetorical patterns that govern the expected rhetorical modes and introduces rhetoric-based mixtures for generation. Experiments show that our model outperforms state-of-the-art approaches and that our model can effectively generate poetry with convincing metaphor and personification. In the future, we will investigate the possibility of incorporating additional forms of rhetoric, such as parallelism and exaggeration, to further enhance the model and generate more diverse poems. 2000 References Pablo Gerv´as. 2001. An expert system for the composition of formal spanish poetry. In Applications and Innovations in Intelligent Systems VIII, pages 19–32. Springer. Marjan Ghazvininejad, Yejin Choi, and Kevin Knight. 2018. Neural poetry translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 67–71. Marjan Ghazvininejad, Xing Shi, Yejin Choi, and Kevin Knight. 2016. Generating topical poetry. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1183–1191. Marjan Ghazvininejad, Xing Shi, Jay Priyadarshi, and Kevin Knight. 2017. 
Hafez: an interactive poetry generation system. Proceedings of ACL 2017, System Demonstrations, pages 43–48. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. Erica Greene, Tugba Bodrumlu, and Kevin Knight. 2010. Automatic analysis of rhythmic poetry with applications to generation and translation. In Proceedings of the 2010 conference on empirical methods in natural language processing, pages 524–533. Jing He, Ming Zhou, and Long Jiang. 2012. Generating chinese classical poems with statistical machine translation models. In Twenty-Sixth AAAI Conference on Artificial Intelligence. Pei Ke, Jian Guan, Minlie Huang, and Xiaoyan Zhu. 2018. Generating informative responses with controlled sentence function. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1499–1508. Diederik P Kingma and Max Welling. 2014. Autoencoding variational bayes. stat, 1050:10. Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. 2016. Autoencoding beyond pixels using a learned similarity metric. In International Conference on Machine Learning, pages 1558–1566. Hung Le, Truyen Tran, Thin Nguyen, and Svetha Venkatesh. 2018. Variational memory encoderdecoder. In Advances in Neural Information Processing Systems, pages 1515–1525. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119. Juntao Li, Yan Song, Haisong Zhang, Dongmin Chen, Shuming Shi, Dongyan Zhao, and Rui Yan. 2018. Generating classical chinese poems via conditional variational autoencoder and adversarial training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3890–3900. Hisar Manurung. 2004. An evolutionary algorithm approach to poetry generation. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Thirtieth AAAI Conference on Artificial Intelligence. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In Advances in neural information processing systems, pages 3483– 3491. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448. Oriol Vinyals and Quoc V Le. 2015. A neural conversational model. Yansen Wang, Chenyi Liu, Minlie Huang, and Liqiang Nie. 2018. Learning to ask questions in opendomain conversational systems with typed decoders. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2193–2203. Zhe Wang, Wei He, Hua Wu, Haiyang Wu, Wei Li, Haifeng Wang, and Enhong Chen. 2016. Chinese poetry generation with planning based neural network. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1051–1060. Cheng Yang, Maosong Sun, Xiaoyuan Yi, and Wenhao Li. 2018. Stylistic chinese poetry generation via unsupervised style disentanglement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3960–3969. 
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. Xiaoyuan Yi, Ruoyu Li, and Maosong Sun. 2018a. Chinese poetry generation with a salient-clue mechanism. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 241–250. 2001 Xiaoyuan Yi, Maosong Sun, Ruoyu Li, and Zonghan Yang. 2018b. Chinese poetry generation with a working memory model. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, page 4553‘‘4559. Jiyuan Zhang, Yang Feng, Dong Wang, Yang Wang, Andrew Abel, Shiyue Zhang, and Andi Zhang. 2017. Flexible and creative chinese poetry generation using neural memory. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1364–1373. Xingxing Zhang and Mirella Lapata. 2014. Chinese poetry generation with recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 670–680. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 654–664. Xianda Zhou and William Yang Wang. 2018. Mojitalk: Generating emotional responses at scale. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1128–1137.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2002–2012 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2002 Enhancing Topic-to-Essay Generation with External Commonsense Knowledge Pengcheng Yang1,2∗, Lei Li3∗, Fuli Luo2, Tianyu Liu2, Xu Sun1,2 1Deep Learning Lab, Beijing Institute of Big Data Research, Peking University 2MOE Key Lab of Computational Linguistics, School of EECS, Peking University 3School of Computer Science and Technology, Xidian University {yang pc, luofuli, tianyu0421, xusun}@pku.edu.cn, [email protected] Abstract Automatic topic-to-essay generation is a challenging task since it requires generating novel, diverse, and topic-consistent paragraph-level text with a set of topics as input. Previous work tends to perform essay generation based solely on the given topics while ignoring massive commonsense knowledge. However, this commonsense knowledge provides additional background information, which can help to generate essays that are more novel and diverse. Towards filling this gap, we propose to integrate commonsense from the external knowledge base into the generator through dynamic memory mechanism. Besides, the adversarial training based on a multi-label discriminator is employed to further improve topic-consistency. We also develop a series of automatic evaluation metrics to comprehensively assess the quality of the generated essay. Experiments show that with external commonsense knowledge and adversarial training, the generated essays are more novel, diverse, and topic-consistent than existing methods in terms of both automatic and human evaluation. 1 Introduction Automatic topic-to-essay generation (TEG) aims at generating novel, diverse, and topic-consistent paragraph-level text given a set of topics. It not only has plenty of practical applications, e.g., benefiting intelligent education or assisting in keyword-based news writing (Lepp¨anen et al., 2017), but also serves as an ideal testbed for controllable text generation (Wang and Wan, 2018). Despite its wide applications described above, the progress in the TEG task lags behind other generation tasks such as machine translation (Bahdanau et al., 2014) or text summarization (Rush et al., 2015). Feng et al. (2018) are the first to propose the TEG task and they utilize coverage vector ∗Equal Contribution.                             Figure 1: Toy illustration of the information volume on three different text generation tasks, which shows that the source information is extremely insufficient compared to the target output on the TEG task. to incorporate topic information for essay generation. However, the model performance is not satisfactory. The generated essays not only lack novelty and diversity, but also suffer from poor topicconsistency. One main reason is that the source information is extremely insufficient compared to the target output on the TEG task. We summarize the comparison of information flow between the TEG task and other generation tasks in Figure 1. In machine translation and text summarization, the source input provides enough semantic information to generate the desired target text. However, the TEG task aims to generate paragraph-level text based solely on several given topics. Extremely insufficient source information is likely to make the generated essays of low quality, both in terms of novelty and topic-consistency. 
In this paper, in order to enrich the source information of the TEG task, we elaborately devise a memory-augmented neural model to incorporate commonsense knowledge effectively. The motivation is that the commonsense from the external knowledge base can provide additional background information, which is of great help to im2003 Output Essay: Our life is a movement, a journey, an adventure towards a goal. Are you nearer to your port of goal today than you were yesterday? Since your ship was first sailed upon the sea of life, you have never been still for a single moment. The sea is too deep, you could not find an anchor if you would, and there can be no pause until you come into port. Commonsense Knowledge Input Topics port life sea port ship [LocateAt] sea [UsedFor] sail a ship sea deep [Property ] life adventure journey [IsA] life [IsA] goal life [HasA] Input Memory Figure 2: Incorporate commonsense knowledge into topic-to-essay generation via the dynamic memory mechanism. The dashed line indicates that the memory is dynamically updated. prove the quality of the generated essay. Figure 2 intuitively shows an example. For the given topic “life”, some closely related concepts (e.g. “adventure”, “journey”, “goal”) are connected as a graph structure in ConceptNet1. These related concepts are an important part of the skeleton of the essay, which provides additional key information for the generation. Therefore, such external commonsense knowledge can contribute to generating essays that are more novel and diverse. More specifically, this commonsense knowledge is integrated into the generator through the dynamic memory mechanism. In the decoding phase, the model can attend to the most informative memory concepts for each word. At the same time, the memory matrix is dynamically updated to incorporate information of the generated text. This interaction between the memory and the generated text can contribute to the coherent transition of topics. To enhance the topic-consistency, we adopt adversarial training based on a multi-label discriminator. The discriminative signal can comprehensively evaluate the coverage of the output on the given topics, making the generated essays more closely surround the semantics of all input topics. The main contributions of this paper are summarized as follows: • We propose a memory-augmented neural model with adversarial training to integrate external commonsense knowledge into topicto-essay generation. • We develop a series of automatic evaluation metrics to comprehensively assess the quality of the generated essay. • Experiments show that our approach can outperform existing methods by a large margin. With the help of commonsense knowledge and adversarial training, the generated essays are more novel, diverse, and topic-consistent. 1A large-scale commonsense knowledge base. Label Distribution Multi-Label Discriminator MemoryAugmented Generator Our life is a movement, a journey ...... Generated Essay . . . . . Topics Generated Text Binary Cross Entropy Loss Reward-Based Objective Figure 3: The sketch of our proposed model and adversarial training. 2 Proposed Model Given a topic sequence x containing m topics, the TEG task aims to generate a topic-consistent essay y containing n words, where n is much larger than m. Figure 3 presents a sketch of our model and training process. The proposed model consists of a memory-augmented generator and a multi-label discriminator. We adopt adversarial training to alternately train the generator and the discriminator. 
2.1 Memory-Augmented Generator The memory-augmented generator Gθ is responsible for generating the desired essay y conditioned on the input topics x. Figure 4 illustrates the overview of Gθ, which consists of an encoder and a decoder with the memory mechanism. Encoder: Here we implement the encoder as an LSTM (Hochreiter and Schmidhuber, 1997) model, which aims to integrate topic information. It reads the input topic sequence x from both directions and computes hidden states for each topic, −→h i = −−−−→ LSTM(−→h i−1, e(xi)) (1) ←−h i = ←−−−− LSTM(←−h i+1, e(xi)) (2) where e(xi) is embedding of xi. The final hidden representation of the i-th topic is hi = [−→h i; ←−h i], where semicolon represents vector concatenation. Decoder: External commonsense knowledge can enrich the source information, which helps 2004                                                        Figure 4: The overview of our memory-augmented generator Gθ. At time-step t, the decoder attends to the concept memory and topic representations to generate a new word. In addition, the memory matrix is dynamically updated via the adaptive gate mechanism. generate essays that are more novel and diverse. Therefore, we equip the decoder with a memory mechanism to effectively incorporate commonsense knowledge from ConceptNet. ConceptNet is a semantic network which consists of triples R = (h; r; t) meaning that head concept h has the relation r with tail concept t. Since the commonsense knowledge of each topic can be represented by its neighboring concepts in the knowledge base, we use each topic as the query to retrieve k neighboring concepts. The pre-trained embeddings of these concepts are stored as commonsense knowledge in a memory matrix M0 ∈Rd×mk, where d is the dimension of the embedding vector.2 In the decoding phase, the generator Gθ refers to the memory matrix for text generation. Specially, the hidden state st of the decoder at time-step t is: st = LSTM  st−1, [e(yt−1); ct; mt]  (3) where [e(yt−1); ct; mt] means the concatenation of vectors e(yt−1), ct, and mt. yt−1 is the word generated at time-step t −1. ct is the context vector that is computed by integrating the hidden representations of the input topic sequence, et,i = f(st−1, hi) (4) αt,i = exp(et,i) m j=1 exp(et,j) (5) ct = m  i=1 αt,ihi (6) 2In practice, the number of columns in M0 is fixed to K. Supposing there are m input topics, then each topic is assigned [K/m] concepts. For special cases where the concept is insufficient, the pre-trained word2vec embeddings are used as an alternative. where f(st−1, hi) is an aligned model (Bahdanau et al., 2014), which measures the dependency between st−1 and hi. mt in Eq. (3) is the memory vector extracted from Mt, which aims to encode the commonsense knowledge to assist in essay generation. Inspired by Sukhbaatar et al. (2015), we use the attention mechanism to find the rows in Mt that are most relevant to the output. Formally, vt = tanh(Wst−1 + b) (7) qt = softmax(vT t Mt) (8) mt =  i qi tMi t (9) where W and b are weight parameters. Mi t is the i-th column of Mt and qi t is the i-th value of qt. Dynamic Memory: As the generation progresses, the topic information that needs to be expressed keeps changing, which requires the memory matrix to be dynamically updated. In addition, the dynamic memory mechanism enables the interaction between the memory and the generated text, which contributes to the coherent transition of topics in the generated essay. 
Concretely, for each memory entry Mi t in Mt, we first compute a candidate update memory  Mi t,  Mi t = tanh  U1Mi t + V1e(yt)  (10) where U1 and V1 are trainable parameters. Inspired by Highway network (Srivastava et al., 2015), we adopt the adaptive gate mechanism to determine how much the i-th memory entry should be updated, gi t = sigmoid  U2Mi t + V2e(yt)  (11) 2005 Algorithm 1 Adversarial training algorithm. Require: the memory-augmented generator Gθ; multi-label discriminator Dφ; the training corpus S = {(x, y)} 1: Initialize Gθ, Dφ with random weights θ, φ. 2: Pre-train Gθ using MLE on S 3: Generate negative samples using Gθ 4: Pre-train Dφ via minimizing Eq. (18) 5: repeat 6: for g-steps do 7: Generate a sequence y = (y1, . . . , yn) ∼Gθ 8: for t in 1 : (n −1) do 9: Compute r(y1:t, yt+1) by Eq. (16) 10: end for 11: Calculate the gradient ∇θJ(θ) by Eq. (15) 12: Update generator parameters 13: end for 14: for d-steps do 15: Generate negative examples using Gθ 16: Train discriminator Dφ via minimizing Eq. (18) 17: end for 18: until Converges where U2 and V2 are learnable parameters. Mi t is eventually updated to Mi t+1 =  1 −gi t  ⊙Mi t + gi t ⊙ Mi t (12) where 1 refers to the vector with all elements 1 and ⊙denotes pointwise multiplication. 2.2 Multi-Label Discriminator The discriminator Dφ is introduced to evaluate topic-consistency between the input topics and the generated essay, which further improves the text quality. Since the source input contains a variable number of topics, here we implement Dφ as a multi-label classifier to distinguish between the real text with several topics and the generated text. In detail, suppose there are a total of |X| topics, the discriminator produces a sigmoid probability distribution over (|X| + 1) classes. The score at the i-th (i ∈{1, · · · , |X|}) index represents the probability that it belongs to the real text with the i-th topic, and the score at the (|X| + 1)-th index represents the probability that the sample is the generated text. Here we implement the discriminator Dφ as a CNN (Kim, 2014) binary classifier. 2.3 Adversarial Training Inspired by SeqGAN (Yu et al., 2017), here we adopt the adversarial training. We train the memory-augmented generator Gθ via policy gradient method (Williams, 1992). Our generator Gθ can be viewed as an agent, whose state at timestep t is the current generated words y1:t−1 = (y1, · · · , yt−1) and the action is the prediction of the next word yt. Once the reward r(y1:t−1, yt) based on both state y1:t−1 and action yt is observed, the training objective of the generator Gθ is to minimize the negative expected reward, J(θ) = −Ey∼Gθ[r(y)] (13) = − n−1  t=1 Gθ(yt+1|y1:t) · r(y1:t, yt+1) (14) where Gθ(yt+1|yt) means the probability that selects the word yt+1 based on the previous generated words. Applying the likelihood ratios trick and sampling method, we can build an unbiased estimation for the gradient of J(θ), ∇θJ(θ) ≈− n−1  t=1  ∇θlogGθ(yt+1|y1:t) · r(y1:t, yt+1)  (15) where yt+1 is the sampled word. Since the discriminator can only evaluate a complete sequence, here Monte Carlo Search with roll-out policy Gθ is applied to sample the unknown n−t words. 
The final reward function is computed as : r(y1:t−1, yt) = ⎧ ⎪ ⎨ ⎪ ⎩ 1 N N i=1 D(yn 1:t) t < n D(y1:n) t = n (16) where N is the number of searches, yn 1:t is the sampled complete sequence based on the roll-out policy Gθ and state y1:t, and D(y) is defined as: D(y) = 1 m m  i=1 Dφ(xi|y) (17) where Dφ(xi|y) denotes the probability predicted by Dφ that the completed sequence y belongs to topic xi. D(y) can be treated as a measure of the coverage of the input topics by the output. A high D(y) requires the generated essay to closely surround the semantics of all input topic words. The discriminator is trained to predict all true topics by minimizing binary cross entropy loss3, J(φ) = − |X|+1  i=1  xilogDφ(xi|y) + (1 −xi)log  1 −Dφ(xi|y)  (18) We alternately train the generator Gθ and the discriminator Dφ. An overview of the training process is summarized in Algorithm 1. 3When calculating binary cross entropy loss, we convert x into (|X| + 1)-dimensional sparse vector. 2006 3 Experiments In this section, we introduce the dataset, evaluation metrics, all baselines, and settings in detail. 3.1 Datasets We conduct experiments on the ZHIHU corpus (Feng et al., 2018). It consists of Chinese essays whose length is between 50 and 100. We select topic words based on the frequency and remove the rare topic words. The total number of labels are set to 100. Sizes of the training set and the test set are 27,000 and 2500. For tuning hyperparameters, we set aside 10% of training samples as the validation set. 3.2 Settings We tune hyper-parameters on the validation set. We use the 200-dim pre-trained word embeddings provided by Song et al. (2018). The vocabulary size is 50,000 and batch size is 64. We use a single layer of LSTM with hidden size 512 for both encoder and decoder. We pre-train our model for 80 epochs with the MLE method. The optimizer is Adam (Kingma and Ba, 2014) with 10−3 learning rate for pre-training and 10−5 for adversarial training. Besides, we make use of the dropout method (Srivastava et al., 2014) to avoid overfitting and clip the gradients (Pascanu et al., 2013) to the maximum norm of 10. 3.3 Baselines We adopt the following competitive baselines: SC-LSTM (Wen et al., 2015) uses gating mechanism to control the flow of topic information. PNN (Wang et al., 2016) applies planning based neural network to generate topic-consistent text. MTA (Feng et al., 2018) utilizes coverage vectors to integrate topic information. Their work also includes: TAV representing topic semantics as the average of all topic embeddings and TAT applying attention mechanism to select the relevant topics. CVAE (Yang et al., 2018b) presents a conditional variational auto-encoder with a hybrid decoder to learn topic via latent variables. Plan&Write (Yao et al., 2018) proposes a planand-write framework with two planning strategies to improve diversity and coherence. 3.4 Evaluation Metrics In this paper, we adopt two evaluation methods: automatic evaluation and human evaluation. 3.4.1 Automatic Evaluation The automatic evaluation of TEG remains an open and tricky question since the output is highly flexible. Previous work (Feng et al., 2018) only adopts BLEU (Papineni et al., 2002) score based on ngram overlap to perform evaluation. However, it is unreasonable to only use BLEU for evaluation because TEG is an extremely flexible task. There are multiple ideal essays for a set of input topics. 
To remedy this, here we develop a series of evaluation metrics to comprehensively measure the quality of output from various aspects. Consistency: An ideal essay should closely surround the semantics of all input topics. Therefore, we pre-train a multi-label classifier to evaluate topic-consistency of the output. Given the input topics x, we define the topic-consistency of the generated essay ˆy as: Consistency(ˆy|x) = ϕ(x, ˆx) (19) where ϕ is Jaccard similarity function and ˆx is topics predicted by a pre-trained multi-label classifier. Here we adopt the SGM model proposed in Yang et al. (2018a) to implement the pre-trained multi-label classifier. Novelty: The novelty of the output can be reflected by the difference between it and the training texts. We calculate the novelty of each generated essay ˆy as: Novelty(ˆy|x) =1 −max{ϕ(ˆy, y0)| (x0, y0) ∈Cx} (20) where ϕ is Jaccard similarity function and Cx is composed of training samples whose corresponding labels are similar to x. Formally, Cx = {(x0, y0)|ϕ(x, x0) > τ} (21) where τ is the set threshold. Diversity: We also calculate the proportion of distinct n-grams in the generated essays to evaluate the diversity of the outputs. In addition, the BLEU scores of different systems are also reported for reference. 3.4.2 Human Evaluation We also perform human evaluation to more accurately evaluate the quality of the generated essays. Each item contains the input topics and outputs of different models. Then, 200 items are distributed to 3 annotators, who have no knowledge 2007 Methods BLEU Consistency Novelty Dist-1 Dist-2 SC-LSTM 5.73 1.98 66.51 0.20 0.69 PNN 5.91 11.25 59.52 1.73 6.92 TAV 6.05 16.59 70.32 2.69 14.25 TAT 6.32 9.19 68.77 2.25 12.17 MTA 7.09 25.73 70.68 2.24 11.70 CVAE 7.46 34.84* 71.28 3.72* 17.92* Plan&Write 8.69* 32.91 72.17* 2.74 14.29 Proposal 9.72 39.42 75.71 5.19 20.49 Impv-Best 11.85% 13.15% 4.91% 39.52% 14.34% Table 1: Results of automatic evaluation. Dist-n evaluates the diversity of the output. The best performance is highlighted in bold and “*” indicates the best result achieved by the baselines. in advance about which model the generated essays come from. Then, they are required to score the generated essay from 1 to 5 in terms of four criteria: novelty, diversity, coherence, and topicconsistency. For novelty, we use the TF-IDF feature to retrieve 10 most similar training samples to provide references for the annotators. 4 Results and Discussion In this section, we report the experimental results. Besides, further analysis is also provided. 4.1 Experimental Results The automatic evaluation results are shown in Table 1. Results show that our approach achieves the best performance in all metrics. For instance, the proposed model achieves 11.85% relative improvement over the best baseline on BLEU score. It demonstrates the effectiveness of our approach in improving the quality of the generated essay. More importantly, in terms of novelty, diversity, and topic-consistency, our model can substantially outperform all baselines. Table 2 presents the human evaluation results, from which we can draw similar conclusions. It is obvious that our approach can outperform the baselines by a large margin, especially in terms of diversity and topic-consistency. For example, the proposed model achieves improvements of 15.33% diversity score and 12.28% consistency score over the best baseline. The main reason for this increase in diversity is that we integrate commonsense knowledge into the generator through the memory mechanism. 
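For concreteness, the consistency and novelty scores of Eq. (19)–(21) reduce to a few set operations. The sketch below is ours: `predict_topics` stands in for the pre-trained multi-label classifier (SGM), the threshold value passed as `tau` is a placeholder since the paper does not report it, and training samples are assumed to be available as (topic set, token list) pairs.

```python
def jaccard(a, b):
    """Jaccard similarity between two collections, treated as sets."""
    a, b = set(a), set(b)
    return len(a & b) / max(len(a | b), 1)

def consistency(generated_tokens, input_topics, predict_topics):
    """Eq. (19): overlap between the input topics and the topics the
    pre-trained multi-label classifier predicts for the generated essay."""
    return jaccard(input_topics, predict_topics(generated_tokens))

def novelty(generated_tokens, input_topics, training_pairs, tau=0.4):
    """Eq. (20)-(21): one minus the highest similarity to training essays
    whose topic sets are sufficiently close to the input topics."""
    candidates = [y for (x, y) in training_pairs if jaccard(input_topics, x) > tau]
    if not candidates:
        return 1.0
    return 1.0 - max(jaccard(generated_tokens, y) for y in candidates)
```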
This external commonsense knowledge provides additional background information, making the generated essays more novel and diverse. In addition, the adversarial training is employed to increase the coverage of Methods Consistency Novelty Diversity Coherence SC-LSTM 1.67 2.04 1.39 1.16 PNN 2.52 1.96 1.95 2.84 MTA 3.17 2.56 2.43 3.28 CVAE 3.42* 2.87* 2.74* 2.63 Plan&Write 3.27 2.81 2.56 3.36* Proposal 3.84 3.24 3.16 3.61 Impv-Best 12.28% 12.89% 15.33% 7.44% Correlation 0.83 0.66 0.68 0.72 Table 2: Results of human evaluation. The best performance is highlighted in bold and “*” indicates the best result achieved by baselines. We calculate the Pearson correlation to show the inter-annotator agreement. the output on the target topics, which further enhances the topic-consistency. 4.2 Ablation Study To understand the importance of key components of our approach, here we perform an ablation study by training multiple ablated versions of our model: without adversarial training, without memory mechanism, and without dynamic update. Table 3 and Table 4 present the automatic and human evaluation results of the ablation study, respectively. Results show that all three ablation operations will result in a decrease in model performance. This indicates that both adversarial training and dynamic memory mechanism can contribute to improving the quality of the output. However, an interesting finding is that the adversarial training and memory mechanism focus on improving different aspects of the model. Memory mechanism We find that the memory mechanism can significantly improve the novelty and diversity. As is shown in Table 3 and Table 4, compared to the removal of the adversarial training, the model exhibits larger degradation in terms 2008 Visualization of Memory Attention I am a student major in in finance and I study economics. I am not a freshman. I I have no special skills. I want to know what can I do to enrich my knowledge and plan my future. I do not want to work work work after graduation. Is there other choices, except for looking for a job job? I hope you can give me some advice, thank you very very very much! Output Essay: Input Topics: Finance Career “know” “knowledge” “economics” “major” “finance” “plan” “hope” “choice” “job” “skills” “special” Word Index Concepts of “Finance” Concepts of “Career” “economics” “going to work” “occupation” “work” A C B Figure 5: Overview of memory attention during generation. The original Chinese output is translated into English. Methods BLEU Consistency Novelty Dist-1 Dist-2 Full Model 9.72 39.42 75.71 5.19 20.49 w/o Adversarial Training 7.74 31.74 74.13 5.22 20.43 w/o Memory 8.40 33.95 71.86 4.16 17.59 w/o Dynamic 8.46 36.18 73.62 4.18 18.49 Table 3: Automatic evaluations of ablation study. “w/o Dynamic” means that we use static memory mechanism. Methods Consistency Novelty Diversity Coherence Full model 3.84 3.24 3.16 3.61 w/o Adversarial 3.31 3.07 3.14 3.43 w/o Memory 3.53 2.73 2.77 3.19 w/o Dynamic 3.62 2.91 2.95 3.37 Table 4: Human evaluations of ablation study. of novelty and diversity when the memory mechanism is removed. This shows that with the help of external commonsense knowledge, the source information can be enriched, leading to the outputs that are more novel and diverse. Adversarial training Another conclusion is that adversarial training can better benefit the model to enhance the topic-consistency of the generated essay compared to memory mechanism. 
In detail, Table 4 shows that the consistency score given by humans for ablated versions without adversarial training and memory mechanism decline 0.53 and 0.31, respectively. The reason is that the discriminative signal in training not only evaluates the quality of the generated text, but also models its degree of association with the input topics, thus enhancing the topic-consistency. 4.3 Validity of Memory Module Here we visualize the attention weights in Eq. (9) to provide a more comprehensive understanding of the memory module. Figure 5 shows an overview of the heatmap of the memory attention weights throughout the process of essay generation. The attention of coarse-grained topics According to Figure 5, in the early stage of decoding (word index 0 to 30), the generated words focus on the topic “finance”. In this case, the generator pays more attention to concepts related to “finance” (area A in the heatmap). As the generation turns more focus on the topic “career”, some concepts related to “career” (area C in the heatmap) are assigned larger attention weights. This indicates that our approach can automatically select the most informative concepts based on the topic being focused by the generated text. The attention of fine-grained words Figure 5 also shows that even focusing on the same topic, our model can finely select the most relevant concepts based on the generated word. For example, when the model generates the word “finance” or “economics”, it pays the most attention to the concept “economics”. This further demonstrates that the memory module can provide external commonsense knowledge, which does a great favor to the generation of high-quality text. Coherent transition between topics The dynamic memory can also enhance the coherence of the generated essay. For instance, in the output 2009 Input Topics: Mother, Childhood SC-LSTM: My hometown is a beautiful city, where the scenery is so beautiful! My hometown has a long history, and the history is so wonderful. Its beauty is beautiful. It is a kind of beauty. It is a kind of beauty. PNN: Childhood is a song. Childhood is a moving song, which carries our laughter. Many, many stories of childhood, childhood memories, just like the stars. Let our childhood be full of happiness. Let us play together, play together. MTA: The mother’s love is selfless. It is my mother who gave me life. My mother gave precious life. Let me try to do my best to repay the mother’s love. I love my mother. My mother is a great mother! CVAE: My mother is a great. She is very great and she loves me very much. She has given a lot to me. I must love my mother, love my mother in the future. Plan&Write: My mother is very beautiful. She loves me very much. I am very happy with her. I have a good childhood. My happy childhood. I have a good time and let us play together. Proposal: My childhood is a happy family. My mother watches TV at home. I do my homework with my mother. My mother likes to read books, and I am a big fan of books. Table 5: Essays generated by different systems. We have translated the original Chinese output into English. essay in Figure 5, “I want to know what can I do to enrich my knowledge and plan my future” is a transition sentence from the topic “finance” to the topic “career”. When generating this sentence, the concepts of both topics (area B in the heatmap) receive a certain degree of attention. 
This illustrates that the dynamic interaction between the memory and the generated text makes the transition between topics more smooth, thus improving the coherence of the output. 4.4 Case Study Table 5 presents the output of different systems with “mother” and “childhood” as input topics. As shown in Table 5, the baselines tend to generate low-quality essays. For instance, the output of SC-LSTM and PNN contains massive duplicate phrases. Neither MTA nor CVAE can express information about topic “childhood”. Although Plan&Write can embody information about both topics, its output is relatively incoherent and less informative. Besides, for the output of these baselines, there exist similar samples in the training set. This indicates that they suffer from poor novelty. Although these baselines strive to incorporate topic information in their unique ways, it is difficult to develop a coherent topic-line based solely on several input topics. This limitation leads to poor coherence and topic-consistency. In contrast, the proposed model succeeds in generating novel high-quality text that closely surrounds the semantics of all input topics. The reason is that our approach can integrate commonsense knowledge into the generator through dynamic memory mechanism. With these additional background information, our model is able to make full expansion to generate the novel and coherent essay. Besides, adversarial training based on the multi-label discriminator further improves the quality of the output and enhances topic-consistency. 5 Related Work Automatic topic-to-essay generation (TEG) aims to compose novel, diverse, and topic-consistent paragraph-level text for several given topics. Feng et al. (2018) are the first to propose the TEG task and they utilize coverage vector to integrate topic information. However, the performance is unsatisfactory, showing that more effective model architecture needs to be explored, which is also the original intention of our work. A similar topic-to-sequence learning task is Chinese poetry generation. Early work adopts rule and template based methods (Tosa et al., 2008; Yan et al., 2013). When involving in neural networks, both Zhang and Lapata (2014) and Wang et al. (2016) employ recurrent neural network and planning to perform generation. Yan (2016) further propose a new generative model with a polishing schema. To balance linguistic accordance and aesthetic innovation, Zhang et al. (2017) adopt memory network to choose each term from reserved inventories. Yang et al. (2018b) and Li et al. (2018) further utilize conditional variational autoencoder to learn topic information. Yi et al. (2018) simultaneously train two generators via mutual reinforcement learning. However, different from poetry generation presenting obvious structured rules, the TEG task requires generating a long unstructured plain text. Such unstructured target output tends to result in the topic drift problem, bringing severe challenges to the TEG task. 2010 Another similar task is story generation, which aims to generate a story based on the short description of an event. Jain et al. (2017) employ statistical machine translation to explore story generation while Lewis et al. (2018) propose a hierarchical strategy. Xu et al. (2018) utilize reinforcement learning to extract a skeleton of the story to promote the coherence. To improve the diversity and coherence, Yao et al. (2018) present a planand-write framework with two planning strategies to fully leverage storyline. 
However, story generation and the TEG task focus on different goals. The former focuses on logical reasoning and aims to generate a coherent story with plots, while the latter strives to generate the essay with aesthetics based on the input topics. Besides, the source information of the TEG task is more insufficient, putting higher demands on the model. 6 Conclusion This work presents a memory-augmented neural model with adversarial training for automatic topic-to-essay generation. The proposed model integrates commonsense from the external knowledge base into the generator through a dynamic memory mechanism to enrich the source information. In addition, the adversarial training based on a multi-label discriminator is employed to further enhance topic-consistency. A series of evaluation metrics are also developed to comprehensively assess the quality of the generated essays. Extensive experimental results show that the proposed method can outperform competitive baselines by a large margin. Further analysis demonstrates that with external commonsense knowledge and adversarial training, the generated essays are more novel, diverse, and topic-consistent. Acknowledgement We thank the anonymous reviewers for their thoughtful comments. This work was supported in part by National Natural Science Foundation of China (No. 61673028). Xu Sun is the corresponding author of this paper. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. 3rd International Conference on Learning Representations. Xiaocheng Feng, Ming Liu, Jiahao Liu, Bing Qin, Yibo Sun, and Ting Liu. 2018. Topic-to-essay generation with neural networks. In Proceedings of the TwentySeventh International Joint Conference on Artificial Intelligence, pages 4078–4084. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, pages 2672–2680. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Parag Jain, Priyanka Agrawal, Abhijit Mishra, Mohak Sukhwani, Anirban Laha, and Karthik Sankaranarayanan. 2017. Story generation from sequence of independent short descriptions. arXiv preprint arXiv:1707.05501. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1746–1751. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Leo Lepp¨anen, Myriam Munezero, Mark GranrothWilding, and Hannu Toivonen. 2017. Data-driven news generation for automated journalism. In Proceedings of the 10th International Conference on Natural Language Generation, pages 188–197. Mike Lewis, Yann Dauphin, and Angela Fan. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 889–898. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119. 
Juntao Li, Yan Song, Haisong Zhang, Dongmin Chen, Shuming Shi, Dongyan Zhao, and Rui Yan. 2018. Generating classical chinese poems via conditional variational autoencoder and adversarial training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3890–3900. Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, and Xu Sun. 2019. A dual reinforcement learning framework for unsupervised text style transfer. arXiv preprint arXiv:1905.10060. 2011 Fuli Luo, Tianyu Liu, Qiaolin Xia, Baobao Chang, and Zhifang Sui. 2018. Incorporating glosses into neural word sense disambiguation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers, pages 2473–2482. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics,, pages 311–318. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, pages 1310–1318. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389. Yan Song, Shuming Shi, Jing Li, and Haisong Zhang. 2018. Directional skip-gram: Explicitly distinguishing left and right context for word embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 175–180. Robert Speer and Catherine Havasi. 2012. Representing general relational knowledge in conceptnet 5. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, pages 3679–3686. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. arXiv preprint arXiv:1505.00387. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, pages 2440–2448. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, pages 3104–3112. Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12: Annual Conference on Neural Information Processing Systems 1999, pages 1057–1063. Naoko Tosa, Hideto Obara, and Michihiko Minoh. 2008. Hitch haiku: An interactive supporting system for composing haiku poem. In Entertainment Computing - ICEC 2008, 7th International Conference, volume 5309, pages 209–216. Ke Wang and Xiaojun Wan. 2018. Sentigan: Generating sentimental texts via mixture adversarial networks. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, pages 4446–4452. 
Zhe Wang, Wei He, Hua Wu, Haiyang Wu, Wei Li, Haifeng Wang, and Enhong Chen. 2016. Chinese poetry generation with planning based neural network. In 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, pages 1051–1060. Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Peihao Su, David Vandyke, and Steve J. Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711–1721. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256. Jingjing Xu, Xuancheng Ren, Yi Zhang, Qi Zeng, Xiaoyan Cai, and Xu Sun. 2018. A skeleton-based model for promoting coherence among sentences in narrative story generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4306–4315. Rui Yan. 2016. i, poet: Automatic poetry composition through recurrent neural networks with iterative polishing schema. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 2238–2244. Rui Yan, Han Jiang, Mirella Lapata, Shou-De Lin, Xueqiang Lv, and Xiaoming Li. 2013. i, poet: Automatic chinese poetry composition through a generative summarization framework under constrained optimization. In Proceedings of the 23rd International Joint Conference on Artificial Intelligence, pages 2197–2203. Pengcheng Yang, Xu Sun, Wei Li, Shuming Ma, Wei Wu, and Houfeng Wang. 2018a. SGM: sequence generation model for multi-label classification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3915–3926. Xiaopeng Yang, Xiaowen Lin, Shunda Suo, and Ming Li. 2018b. Generating thematic chinese poetry using conditional variational autoencoders with hybrid decoders. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, pages 4539–4545. 2012 Lili Yao, Nanyun Peng, Weischedel Ralph, Kevin Knight, Dongyan Zhao, and Rui Yan. 2018. Planand-write: Towards better automatic storytelling. arXiv preprint arXiv:1811.05701. Xiaoyuan Yi, Maosong Sun, Ruoyu Li, and Wenhao Li. 2018. Automatic poetry generation with mutual reinforcement learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3143–3153. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 2852–2858. Jiyuan Zhang, Yang Feng, Dong Wang, Yang Wang, Andrew Abel, Shiyue Zhang, and Andi Zhang. 2017. Flexible and creative chinese poetry generation using neural memory. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1364–1373. Xingxing Zhang and Mirella Lapata. 2014. Chinese poetry generation with recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 670–680. Yi Zhang, Jingjing Xu, Pengcheng Yang, and Xu Sun. 2018. Learning sentiment memories for sentiment modification without parallel data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1103–1108.
2019
193
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2013–2022 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2013 Towards Fine-grained Text Sentiment Transfer Fuli Luo1, Peng Li2, Pengcheng Yang1, Jie Zhou2, Yutong Tan3, Baobao Chang1,4, Zhifang Sui1,4, Xu Sun1 1Key Lab of Computational Linguistics, Peking University 2Pattern Recognition Center, WeChat AI, Tencent Inc, China 3Computer Science and Technology, Beijing Normal University 4Peng Cheng Laboratory, China [email protected], [email protected], yang [email protected], [email protected], [email protected], {chbb,szf,xusun}@pku.edu.cn Abstract In this paper, we focus on the task of finegrained text sentiment transfer (FGST). This task aims to revise an input sequence to satisfy a given sentiment intensity, while preserving the original semantic content. Different from conventional sentiment transfer task that only reverses the sentiment polarity (positive/negative) of text, the FTST task requires more nuanced and fine-grained control of sentiment. To remedy this, we propose a novel Seq2SentiSeq model. Specifically, the numeric sentiment intensity value is incorporated into the decoder via a Gaussian kernel layer to finely control the sentiment intensity of the output. Moreover, to tackle the problem of lacking parallel data, we propose a cycle reinforcement learning algorithm to guide the model training. In this framework, the elaborately designed rewards can balance both sentiment transformation and content preservation, while not requiring any ground truth output. Experimental results show that our approach can outperform existing methods by a large margin in both automatic evaluation and human evaluation. Our code and data, including outputs of all baselines and our model are available at https://github.com/luofuli/ Fine-grained-Sentiment-Transfer. 1 1 Introduction Text sentiment transfer aims to rephrase the input to satisfy a given sentiment label (value) while preserving its original semantic content. It facilitates various NLP applications, such as automatically converting the attitude of review and fighting against offensive language in social media (dos Santos et al., 2018). Previous work (Shen et al., 2017; Li et al., 2018; Luo et al., 2019) on text sentiment transfer mainly focuses on the coarse-grained level: the reversal of 1Joint work between WeChat AI and Peking University. Input Sentence Tasty food and wonderful service. Target Sentiment Output Sentence 0.1 Horrible food and terrible service! 0.3 Plain food, slow service. 0.5 Food and service need improvement. 0.7 Good food and service. 0.9 Amazing food and perfect service!! Tar Senti 0 0 0 0 0 Input S Figure 1: An example of the input and output of the fine-grained text sentiment transfer task. The output reviews describe the same content (e.g. food/service) as the input while expressing different sentiment intensity. positive and negative sentiment polarity. They are confined to scenarios where there are two discrete sentiment labels. To achieve more nuanced and precise sentiment control of text generation, we turn to fine-grained text sentiment transfer (FTST) which revises a sequence to satisfy a given sentiment intensity2, while keeping the semantic content unchanged. 
Taking Figure 1 as an example, given the same input and five sentiment intensity values ranging from 0 (most negative) to 1 (most positive), the system generates five different outputs that satisfy the corresponding sentiment intensity in a relative order. There are two main challenges of FTST task. First, it is tough to achieve fine-grained control of the sentiment intensity when generating sentence. Previous work about coarse-grained text sentiment transfer usually uses a separate decoder for each sentiment label (Xu et al., 2018; Zhang et al., 2018b) or embeds each sentiment label into a separate vector (Fu et al., 2018; Li et al., 2018). However, these methods are not feasible for fine-grained text sentiment transfer since the 2The sentiment intensity is a real-valued score between 0 and 1, following sentiment intensity prediction task in sentiment analysis (Zhang et al., 2017; Mohammad et al., 2018). 2014 target sentiment intensity value is a real value, other than discrete labels. Second, parallel data3 is unavailable in practice. In other words, we can only access the corpora which are labeled with fine-grained sentiment ratings or intensity values. Therefore, in the FTST task, we can not train a generative model via ground truth outputs. To tackle the two challenges mentioned above, we propose two corresponding solutions. First, in order to control the sentiment intensity of the generated sentence, we propose a novel sentiment intensity controlled sequence-to-sequence (Seq2Seq) model Seq2SentiSeq. It incorporates the sentiment intensity value into the conventional Seq2Seq model via a Gaussian kernel layer. By this means, the model can encourage the generation of words whose sentiment intensity closer to the given intensity value during decoding. Second, due to the lack of parallel data, we can not directly train the proposed model via MLE (maximum likelihood estimation). Therefore, we propose a cycle reinforcement learning algorithm to guide the model training without any parallel data. The designed reward can balance both sentiment transformation and content preservation, while not requiring any ground truth output. Evaluation of the FTST task is also challenging and complex. In order to build a reliable automatic evaluation, we collect human references for FTST task on the Yelp review dataset4 via crowdsourcing and design a series of automatic metrics. The main contributions of this work are summarized as follows: • We propose a sentiment intensity controlled generative model Seq2SentiSeq, in which a sentiment intensity value is introduced via a Gaussian kernel layer to achieve fine-grained sentiment control of the generated sentence. • In order to adapt to non-parallel data, we design a cycle reinforcement learning algorithm CycleRL to guide the model training in an unsupervised way. • Experiments show that the proposed approach can largely outperform state-of-theart systems in both automatic evaluation and human evaluation. 3Parallel data in this paper denotes the corpus where each pair of sentences describes the same content while expressing the different sentiment intensity. 4https://www.yelp.com/dataset 2 Proposed Model 2.1 Task Definition Given an input sequence x and a target sentiment intensity value vy, the FTST task aims to generate a sequence y which not only expresses the target sentiment intensity vy, but also preserve the original semantic content of the input x. 
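A PyTorch-style sketch of the decoder step in Eq. (1) is shown below: the previous word is looked up in both the semantic table E_c and the sentiment table E_s, concatenated with the attention context c_t, and fed to the recurrent cell. The attention follows Luong et al. (2015) as stated above, but the use of an LSTMCell, the computation of the context from the previous decoder state, and all module names and dimensions are assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SentiDecoderStep(nn.Module):
    """One decoder step of Eq. (1): s_t = f(s_{t-1}, [E_c(y_{t-1}); E_s(y_{t-1})], c_t).
    Hypothetical sketch: module names, sizes and the LSTMCell are not from the paper."""
    def __init__(self, vocab_size, emb_dim=300, hid_dim=256, enc_dim=512):
        super().__init__()
        self.E_c = nn.Embedding(vocab_size, emb_dim)        # semantic embeddings
        self.E_s = nn.Embedding(vocab_size, emb_dim)        # sentiment embeddings
        self.cell = nn.LSTMCell(2 * emb_dim + enc_dim, hid_dim)
        self.W_a = nn.Linear(hid_dim, enc_dim, bias=False)  # Luong-style attention

    def forward(self, y_prev, state, enc_outputs):
        # y_prev: (batch,) previous word ids; state: (h, c), each (batch, hid_dim);
        # enc_outputs: (batch, src_len, enc_dim) encoder states {h_i}.
        h_prev, c_prev = state
        # Attention context c_t, computed here from the previous decoder state,
        # which is one common reading of "attention as in Luong et al. (2015)".
        scores = torch.bmm(enc_outputs, self.W_a(h_prev).unsqueeze(2)).squeeze(2)
        alpha = F.softmax(scores, dim=-1)
        context = torch.bmm(alpha.unsqueeze(1), enc_outputs).squeeze(1)
        # [E_c(y_{t-1}); E_s(y_{t-1})] concatenated with c_t, then the recurrent update.
        inp = torch.cat([self.E_c(y_prev), self.E_s(y_prev), context], dim=-1)
        h_t, c_t = self.cell(inp, (h_prev, c_prev))
        return (h_t, c_t), alpha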
Without loss of generality, we limit the sentiment intensity value vy ranging from 0 (most negative) to 1 (most positive). 2.2 Seq2SentiSeq: Sentiment Intensity Controlled Seq2Seq Model Figure 2 presents a sketch of the proposed Seq2SentiSeq model. The model is based on the encoder-decoder framework, which takes a source text x as the input and outputs a target sentence y with the given sentiment intensity vy. In order to control the sentiment intensity of y, we introduce a Gaussian kernel layer into the decoder. 2.2.1 Encoder We use a bidirectional RNN as the encoder to capture source content information. Each word in the source sequence x = (x1, · · · , xm) is firstly represented by its semantic representation mapped by semantic embedding Ec. The RNN reads the semantic representations from both directions and computes the forward hidden states {−→h i}m i=1 and backward hidden states {←−h i}m i=1 for each word. We obtain the final hidden representation of the ith word by concatenating the hidden states from both directions hi = [−→h i; ←−h i]. 2.2.2 Decoder Given the hidden representations {hi}m i=1 of the input sequence x and the target sentiment intensity value vy, the decoder aims to generate a sequence y which not only describes the same content as the input sequence x, but also expresses a close sentiment intensity to vy. In order to achieve the aim of controlling sentiment during decoding, we firstly embedded each word with an additional sentiment representation, besides the original semantic representation. The semantic representation characterizes the semantic content of the word, while the sentiment representation characterizes its sentiment intensity. Formally, the hidden state st of the decoder at timestep t is computed as follows: st = f st−1, [Ec(yt−1); Es(yt−1)] , ct  (1) 2015 ℎ! ℎ" ℎ# … Attention Layer Encoder "! "" "$ Decoder "$ #% #& Target Sentiment Intensity Value '! = 0.9 Gaussian Kernel Layer <eos> Semantic Embeddings Sentiment Embeddings ", $$ $$ %&(()) + ($ The food is okay %,(()) "0.9 The food is good ", extremely The food is not %& (, ($ +$ % ($ +$ &(($) sum Encoder Decoder .( '! Thumbnail View Figure 2: The proposed sequence to sentiment controlled sequence (Seq2SentiSeq) model. where Es(yt−1) refers to the sentiment representation of the word yt−1 mapped by the sentiment embedding matrix Es, Ec(yt−1) is the semantic representation, and the context vector ct is computed by an attention mechanism in the same way as Luong et al. (2015). Considering two goals of the FTST task: sentiment transformation and content preservation, we model the final generation probability into a mixture of semantic probability and sentiment probability, where the former evaluates content preservation and the latter measures sentiment transformation. Similar to the traditional Seq2Seq model (Bahdanau et al., 2014), the semantic probability distribution over the whole vocabulary is computed as follows: pc t = softmax(Wcst) (2) where Wc is a trainable weight matrix. The sentiment probability measures how close the sentiment intensity of the generated sequence to the target vy. Normally, each word has a specific sentiment intensity. For example, the word “okay” has a positive intensity around 0.6, “good” is around 0.7, and “great” is around 0.8. However, when involving to the previous generated words, the sentiment intensity of current generated word may be totally different. For example, the phrase “not good” has a negative intensity around 0.3, while “extremely good” is around 0.9. 
That is to say, the sentiment intensity of each word at time-step t should be decided by both the sentiment representation Es and the current decoder state st. Therefore, we define a sentiment intensity prediction function g(Es, st) as follows: g(Es, st) = sigmoid(EsWsst) (3) where Ws is a trainable parameter, and sigmoid is used to scale the predicted intensity value to [0, 1]. Intuitively, in order to achieve fine-grained control of sentiment, words whose sentiment intensities are closer to the target sentiment intensity value vy should be assigned a higher probability. Take Figure 2 as an example, at the 5-th time-step, word “good” should be assigned a higher probability than word “bad”, thus the predicted intensity value g(“good”, s4) is closer to the target sentiment intensity than g(“bad”, s4). To favor words whose sentiment intensity is near vy, we introduce a Gaussian kernel layer which places a Gaussian distribution centered around vy, inspired by Luong et al. (2015) and Zhang et al. (2018a). Specifically, the sentiment probability is formulated as: os t = 1 √ 2πσexp − g(Es, st) −vy 2 2σ2 ! (4) ps t = softmax(os t) (5) where σ is the standard deviation. To balance both sentiment transformation and content preservation, the final probability distribution pt over the entire vocabulary is defined as a mixture of two probability distributions: pt = γps t + (1 −γ)pc t (6) where γ is the hyper-parameter that controls the trade-off between two generation probabilities. 2016 Encoder Decoder ! "# Encoder Decoder $% &' () Sentiment Scorer $* &) Figure 3: Cycle reinforcement learning. Note that the upper encoder-decoder model and the lower encoderdecoder are just one Seq2SentiSeq model. 2.3 Training: Cycle Reinforcement Learning A serious challenge of the FTST task is the lack of parallel data. Since the ground truth output y is unobserved, we can not directly use the maximum likelihood estimation (MLE) for training. To remedy this, we design a cycle reinforcement learning (CycleRL) algorithm. An overview of the training process is summarized in Algorithm 1. Two rewards are designed to encourage changing sentiment but preserving content, without the need of parallel data. The definitions of the two rewards and the corresponding gradients for Seq2SentiSeq model S are introduced as follows. 2.3.1 Reward Design We design the respective rewards for two goals (sentiment transformation and content preservation) of the FTST task. Then, an overall reward r is calculated to balance these two goals and guide the model training. Reward for sentiment transformation. A pretrained sentiment scorer is used to evaluate how well the sampled sentence ˆy matches the target sentiment intensity value vy. Specifically, the reward for sentiment transformation is formulated as: rs = 1/(|vy −ϕ(ˆy)| + 1) (7) where ϕ refers to the pre-trained sentiment scorer which is implemented as LSTM-based linear regression model. Reward for content preservation. Intuitively, if the model performs well in content preservation, it is easy to back-reconstruct the source input x. Therefore, we design the reward for content preservation to be the probability of the model reconstructing x based on the generated text ˆy and the source sentiment intensity value vx. rc = p(x|ˆy, vx; θ) (8) where θ is the parameter of Seq2SentiSeq model. Algorithm 1 The cycle reinforcement learning algorithm for training Seq2SentiSeq. 
Input: A corpora D = {(xi,i )} where each sequence xi is labeled with a fine-grained sentiment label vi 1: Initial the pseudo-parallel data V0 = {(xi, ˆyi)} 2: Pre-train Seq2SentiSeq model Sθ using V0 3: for each iteration t = 1, 2, ..., T do 4: Sample a sentence x from D 5: for k = 1, 2, ..., K do 6: Sample a intensity value v(k) y from interval [0, 1] 7: Generate a target sequence: ˆy(k) = S(x, v(k) y ; θ) 8: Compute sentiment reward r(k) s based on Eq. 7 9: Compute content reward r(k) c based on Eq. 8 10: Compute total reward r(k) based on Eq. 9 11: end for 12: Update θ using reward {r(k)}K k=1 based on Eq. 11 13: Update θ using cycle reconstruction loss in Eq. 12 14: end for Overall reward. To encourage the model to improve both sentiment transformation and content preservation, the final reward r guiding the model training is designed to be the harmonic mean of the above two rewards: r = 1 + β2 rc · rs (β2 · rc) + rs (9) where β is a harmonic weight that controls the trade-off between two rewards. 2.3.2 Optimization The goal of RL training is to minimize the negative expected reward, L(θ) = − X k r(k)pθ(ˆy(k)|x) (10) where ˆy(k) is the k-th sampled sequence according to probability distribution p in Eq. 6, r(k) is the reward of ˆy(k), and θ is the parameter of the proposed model in Figure 2. By means of policy gradient method (Williams, 1992), for each training example, the expected gradient of Eq. 10 can be approximated as: ∇θL(θ) ≃−1 K K X k=1 r(k) −b  ∇θlog pθ(ˆy(k))  (11) where K is the sample size and b is the greedy search decoding baseline that aims to reduce the variance of gradient estimate which is implemented in the same way as Paulus et al. (2017). Nevertheless, RL training strives to optimize a specific metric which may not guarantee the fluency of the generated text (Paulus et al., 2017), and 2017 usually faces the unstable training problems (Li et al., 2017). The most direct way is to expose the sentences which are from the training corpus to the decoder and trained via MLE (also called teacher-forcing). In order to expose the decoder to the original sentence from the training corpus, we borrow ideas from back-translation (Lample et al., 2018a,b). Specifically, the model first generates a sequence ˆy based on the input text x and the target sentiment intensity value vy, and then reconstructs the source input x based on ˆy and the source sentiment intensity value vx. Therefore, the gradient of the cycle reconstruction loss is defined as: ∇θJ (θ) = ∇θlog  p x|S(x, vy; θ), vx; θ  (12) where S refers to the Seq2SeniSeq model. Finally, we alternately update the model parameters θ based on Eq. 11 and Eq. 12. 3 Experimental Setup In this section, we introduce the dataset, experiment settings, baselines, and evaluation metrics. 3.1 Dataset We conduct experiments on the Yelp dataset5, which consists of a large number of product reviews. Each review is assigned a sentiment rating ranging from 1 to 5. Since the label inconsistency between human is more serious in fine-grained ratings, we average the ratings for the sentences which have a Jaccard Similarity more than 0.9. Then, averaged ratings are normalized between 0 and 1 as the sentiment intensity. Other data preprocessing is the same as Shen et al. (2017). Finally, we obtain a total of 640K sentences. We randomly hold 630K for training, 10K for validation, and 500 for testing. 
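Two pieces of the model and its training objective translate almost directly into code: the sentiment-aware generation probability of Eqs. (2)-(6) and the harmonic reward of Eqs. (7)-(9) used inside Algorithm 1. The PyTorch sketch below is an illustrative rendering under assumed tensor shapes and module names, not the authors' released implementation; in particular, the reconstruction probability of Eq. (8) is simply passed in as a scalar rather than recomputed by the model.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianSentimentMixture(nn.Module):
    """Sketch of Eqs. (2)-(6): mix a semantic softmax with a sentiment softmax in
    which a Gaussian kernel centred on the target intensity v_y favours words whose
    predicted intensity is close to v_y. Names, shapes and the sharing of E_s with
    the decoder input embeddings are illustrative assumptions."""
    def __init__(self, vocab_size, emb_dim=300, hid_dim=256, sigma=0.01, gamma=0.5):
        super().__init__()
        self.E_s = nn.Embedding(vocab_size, emb_dim)        # sentiment embeddings
        self.W_s = nn.Linear(hid_dim, emb_dim, bias=False)  # Eq. (3)
        self.W_c = nn.Linear(hid_dim, vocab_size)           # Eq. (2)
        self.sigma, self.gamma = sigma, gamma

    def forward(self, s_t, v_y):
        # s_t: (batch, hid_dim) decoder state; v_y: (batch,) target intensities.
        p_c = F.softmax(self.W_c(s_t), dim=-1)                                   # Eq. (2)
        g = torch.sigmoid(self.E_s.weight @ self.W_s(s_t).unsqueeze(-1)).squeeze(-1)  # Eq. (3)
        o_s = torch.exp(-(g - v_y.unsqueeze(-1)) ** 2 / (2 * self.sigma ** 2)) \
              / (math.sqrt(2 * math.pi) * self.sigma)                            # Eq. (4)
        p_s = F.softmax(o_s, dim=-1)                                             # Eq. (5)
        return self.gamma * p_s + (1 - self.gamma) * p_c                         # Eq. (6)

def overall_reward(v_y, predicted_intensity, p_reconstruct, beta=1.0):
    """Eqs. (7)-(9): harmonic mean of the sentiment and content rewards."""
    r_s = 1.0 / (abs(v_y - predicted_intensity) + 1.0)            # Eq. (7), via the scorer
    r_c = p_reconstruct                                           # Eq. (8), p(x | y_hat, v_x)
    return (1 + beta ** 2) * r_c * r_s / (beta ** 2 * r_c + r_s)  # Eq. (9)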
Even though the sentiment intensity distribution of training dataset is not uniform, the proposed framework consists of a uniform data augmentation which generates sentences whose intensity is from interval [0, 1] with a step of 0.05 to guide the model training (Step 6 in Algorithm 1). 3.2 Experiment Settings We tune hyper-parameters on the validation set. The size of vocabulary is set to 10K. Both the semantic and sentiment embeddings are 300dimensional and are learned from scratch. We 5https://www.yelp.com/dataset implement both encoder and decoder as a 1-layer LSTM with a hidden size of 256, and the former is bidirectional. The batch size is 64. We pre-train our model for 10 epochs with the MLE loss using pseudo-parallel sentences conducted by Jaccard Similarity, which is same as Liao et al. (2018). Harmonic weight β in Eq. 9 is 1 and γ in Eq. 6 is 0.5. The standard deviation σ is set to 0.01 for yielding suitable peaked distributions. The sample size K in Eq. 11 is set to 16. The optimizer is Adam (Kingma and Ba, 2014) with 10−3 initial learning rate for pre-training and 10−5 for cycleRL training. Dropout (Srivastava et al., 2014) is used to avoid overfitting. 3.3 Baselines We compare our proposed method with the following two series of state-of-the-art systems. Fine-grained systems aim to modify an input sentence to satisfy a given sentiment intensity. Liao et al. (2018) construct pseudo-parallel corpus to train a model which is a combination of a revised-VAE and a coupling component modeling pseudo-parallel data with three extra losses Lextra. What’s more, we also consider SCSeq2Seq (Zhang et al., 2018a) which is a specificity controlled Seq2Seq model proposed in dialogue generation. In order to adapt to this unsupervised task, the proposed CycleRL training algorithm is used to train the SC-Seq2Seq model. Coarse-grained systems aim to reverse the sentiment polarity (positive/negative) of the input, which can be regarded as a special case where the sentiment intensity is set below average (negative) or above average (positive). We compare our proposed method with the following state-of-the-art systems: CrossAlign (Shen et al., 2017), MultiDecoder (Fu et al., 2018), DeleteRetrieve (Li et al., 2018) and Unpaired (Xu et al., 2018). 3.4 Evaluation Metrics We adopt both automatic and human evaluation. 3.4.1 Automatic Evaluation Automatic evaluation of FTST is an open and challenging issue, thereby we adopt a combination of multiple evaluation methods. Content: To evaluate the content preservation performance, we hired crowd-workers on CrowdFlower6 to write human references.7 For each 6https://www.crowdflower.com/ 7We will release the collected human references and the 2018 Model Automatic Evaluation Human Evaluation BLEU-1↑ BLEU-2↑ MAE↓ MRRR↑ PPL↓ Content↑ Sentiment↑ Fluency↑ Avg↑ Revised-VAE 22.6 7.2 0.24 0.62 102.2 2.64 2.52 2.13 2.43 Revised-VAE + Lextra 20.7 5.7 0.18 0.67 102.6 2.54 3.84 2.14 2.84 SC-Seq2Seq 23.9 3.8 0.25 0.69 41.2 2.37 3.85 3.41 3.21 Seq2SentiSeq 32.5 10.3 0.13 0.78 35.1 3.62 4.09 4.17 3.96 Human Reference 100.0 100.0 0.07 0.83 31.2 4.51 4.36 4.75 4.54 Table 1: Automatic evaluation and human evaluation in three aspects: Content (BLUE-1, BLUE-2), Sentiment (MAE, MRRR) and Fluency (PPL). Avg shows the average human scores. ↑denotes larger is better, and vice versa. Bold denotes the best results. review in the test dataset, crowd-workers are required to write five references with sentiment intensity value from V ′ = [0.1, 0.3, 0.5, 0.7, 0.9]. 
Therefore, the BLEU (Papineni et al., 2002) score between the human reference and the corresponding generated text of the same sentiment intensity can evaluate the content preservation performance. Fluency: To measure the fluency, we calculate the perplexity (PPL) of each generated sequence via a pre-trained bi-directional LSTM language model (Mousa and Schuller, 2017). Sentiment: In order to measure how close the sentiment intensity of outputs to the target intensity values, we define three metrics. Given an input sentence x and a list of target intensity values V = [v1, v2, ..., vN], the corresponding outputs of the model are [ˆy1, ˆy2, ..., ˆyN]. We then use a pre-trained sentiment regression scorer to predict the sentiment intensity values of outputs as ˆV = [ˆv1, ˆv2, ..., ˆvN]. Following Liao et al. (2018), we use the mean absolute error (MAE = 1 N PN i=1 |vi −ˆvi|) between V and ˆV to measure the absolute gap. Moreover, for fine-grained text sentiment transfer task, we expect that given a higher sentiment intensity value, the model will generate a more positive sentence. That is to say, the relative intensity ranking of all generated sentences of the same input is also important. Inspired by the Mean Reciprocal Rank metric which is widely used in the Information Retrieval area, we design a Mean Relative Reciprocal Rank (MRRR) metric to measure the relative ranking MRRR = 1 N N X i=1 1 |rank(vi) −rank(ˆvi)| + 1 (13) In addition, we also compare our model with the coarse-grained sentiment transfer systems. In order to make the results comparable, we define the generated test samples of all baselines for reproducibility. sentiment intensity larger/smaller than 0.5 as positive/negative results. Then we use a pre-trained binary TextCNN classifier (Kim, 2014) to compute the classification accuracy. 3.4.2 Human Evaluation We also perform human evaluation to assess the quality of generated sentences more accurately. Each item contains the source input, the sampled target sentiment intensity value, and the output of different systems. Then 500 items are distributed to 3 evaluators, who are required to score the generated sentences from 1 to 5 based on the input and target sentiment intensity value in terms of three criteria: content, sentiment, fluency. Content evaluates the content preservation degree. Sentiment refers to how much the output matches the target sentiment intensity. Fluency is designed to measure whether the generated texts are fluent. For each metric, the average Pearson correlation coefficient of the scores given by three evaluators is greater than 0.71, which ensures the interevaluator agreement. 4 Results and Discussion 4.1 Evaluation Results The automatic evaluation and human evaluation results are shown in Table 1. It shows that our approach achieves the best performance in all metrics. More specifically, we have the following observations: (1) The proposed model Seq2SentiSeq obtains 8.6/3.1/0.98 points absolute improvement over the best results on BLEU-1/BLEU-2/Content score. It demonstrates the effectiveness of our approach in improving the content preservation of the input sentences. (2) Our model can more precisely control the sentiment intensity from human scores on sentiment, and it can also obtain both best results in sentiment mean absolute error (MAE) and relative sentiment rank (MRRR). 
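The MAE and MRRR scores reported in Table 1 can be computed as follows; the 0-based ranking and the handling of ties in Eq. (13) are assumptions where the definition leaves them unspecified, and the example values are purely illustrative.

import numpy as np

def mae(target, predicted):
    """Mean absolute error between target and predicted intensities (Liao et al., 2018)."""
    return float(np.mean(np.abs(np.asarray(target) - np.asarray(predicted))))

def mrrr(target, predicted):
    """Mean Relative Reciprocal Rank of Eq. (13): agreement between the intensity
    ranking requested for one input and the ranking realised in its outputs."""
    target, predicted = np.asarray(target), np.asarray(predicted)
    rank_t = target.argsort().argsort()      # 0-based ranks; tie handling is assumed
    rank_p = predicted.argsort().argsort()
    return float(np.mean(1.0 / (np.abs(rank_t - rank_p) + 1)))

# Example: five target intensities for one input sentence and the intensities a
# pre-trained sentiment scorer assigns to the five generated outputs.
V     = [0.1, 0.3, 0.5, 0.7, 0.9]
V_hat = [0.15, 0.42, 0.38, 0.70, 0.88]
print(mae(V, V_hat), mrrr(V, V_hat))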
2019 Model Automatic Evaluation Human Evaluation BLEU-1↑ BLEU-2↑ MAE↓ MRRR↑ PPL↓Content↑Sentiment↑Fluency↑ Avg↑ Full Model 32.5 10.3 0.13 0.78 35.1 3.62 4.09 4.17 3.96 w/o Pre-training 14.3 0.7 0.32 0.48 7.2 1.01 1.30 3.86 2.06 w/o Cycle reconstruction 16.5 2.3 0.31 0.41 70.1 1.92 1.48 3.16 2.19 w/o Reinforcement learning 25.7 4.1 0.22 0.63 46.0 2.69 3.74 3.80 3.41 Table 2: Automatic evaluation and human evaluation of ablation study. Model neg-to-pos pos-to-neg Multidecoder 54.3 50.2 CrossAlign 73.3 71.7 Unpaired 78.9 73.0 DeleteRetrieve 89.6 83.1 Revised-VAE 64.3 62.0 Revised-VAE + Lextra 89.3 77.9 SC-Seq2Seq 67.2 59.6 Seq2SentiSeq 89.4 83.5 Table 3: Binary sentiment classification accuracy of the coarse-grained (upper) and fine-grained (lower) text sentiment transfer systems. Bold denotes the best results of each task. However, SC-Seq2Seq gets the second best MAE score while Revised-VAE + Lextra gets the second best MRRR score. We can infer that the two models excel at different aspects. And MRRR provides a different perspective on the sentiment results. (3) The proposed model can generate more fluent sentences than all baselines. The main reason for these three phenomenons is that we design two rewards that can directly ensure the content preservation and sentiment transformation in the cycle reinforcement training process. In addition, the cycle reconstruction loss can effectively guarantee the fluency of generated sentences, which has been further verified in the ablation study. What’s more, we also simplify our task to the setting of coarse-grained (positive/negative) sentiment transfer task. Table 3 shows the binary sentiment accuracy of the representative systems. We can find that the proposed model achieve the best results over the fine-grained systems, and it is comparable to the best coarse-grained system. 4.2 Ablation Study In this section, we further discuss the impacts of the components of the proposed model. We retrain our model by ablating multiple components of our model: without pre-training, without cycle reconstruction (Eq. 12), without reinforcement learning ( Eq. 11). Table 2 shows the corresponding automatic and human evaluations. The perforInput the beer isn’t bad, but the food was less than desirable. Output Seq2SentiSeq V=0.1 the beer is terrible, and the food was the worst. V=0.3 the beer wasn’t bad, and the food wasn’t great too. V=0.5 the food is ok, but not worth the drive to the strip. V=0.7 the beer is good, and the food is great. V=0.9 the wine is great, and the food is extremely fantastic. Output Revised-VAE + Lextra V=0.1 n’t no about about no when about that was when about V=0.3 the beer sucks , but the food is not typical time. V=0.5 the beer is cheap, but the food was salty and decor. V=0.7 i just because decent management salty were impersonal. V=0.9 n’t that about was that when was about as when was Table 4: Example outputs with five sentiment intensity values V ranging from 0 to 1. mance declines most when without pre-training. This reveals that reinforcement learning is heavily dependent on pre-training as a warm start because it is hard for RL architecture to train from scratch. Moreover, no pre-training will lead the model to generate frequent words and short sentence which gets low PPL score. What’s more, the performance of ablated version without cycle reconstruction also drops significantly, since cycle reconstruction plays an important role of teacherforcing in our paper. 
Finally, even though the proposed Seq2SentiSeq without reinforcement learning can beat the best baseline in terms of human average score, reinforcement learning still helps to boost the performance of the proposed model by a large margin. 4.3 Case Study Table 4 shows the example outputs on the YELP datasets with five sentiment intensity values. This case demonstrates that our model can both preserve the content (“beer”, “food”) and change the sentiment to the desired intensity. More importantly, our model can capture the subtle sentiment difference of the words or phrases, e.g., “the worst” →“bad” →“ok” →“good” →“extremely fantastic”. However, the Revised-VAE + Lextra system does not show this sentiment trend and may collapse when intensity value V is very 2020 Semantic Embeddings Sentiment Embeddings Figure 4: t-SNE visualization of the semantic embeddings (upper) and sentiment embeddings (lower) in the Seq2SentiSeq model. small (0.1) or very big (0.9). And our model sometimes may also suffer from semantic drift, e.g., “beer” is revised to “wine”. 4.4 Analysis on Sentiment Representation We also conduct analysis to understand the sentiment representations of words introduced in our model. We use the 1000 most frequent words from the training dataset. Then, we use a human annotated sentiment lexicon (Hutto and Gilbert, 2014) to classify them into three categories: positive, neutral and negative. After that, we get 112 positive words, 841 neutral words and 47 negative words. Finally, we apply t-SNE (Rauber et al., 2016) to visualize both semantic and sentiment embeddings of the proposed model (Figure 2) when finished training. As shown in Figure 4, we can see that the distributions of the two embeddings are significantly different. In the semantic embedding space, most of the positive words and negative words lie closely. On the contrary, in the sentiment embedding space, positive words are far from negative words. In conclusion, neighbors on semantic embedding space are semantically related, while neighbors on sentiment embedding space express a similar sentiment intensity. 5 Related Work Recently, there is a growing literature on the task of unsupervised sentiment transfer. This task aims to reverse the sentiment polarity of a sentence but keep its content unchanged without parallel data (Fu et al., 2018; Tsvetkov et al., 2018; Li et al., 2018; Xu et al., 2018; Lample et al., 2019). However, there are few researches focus on the fine-grained control of sentiment. Liao et al. (2018) exploits pseudo-parallel data via heuristic rules, thus turns this task to a supervised setting. They then propose a model based on Variational Autoencoder (VAE) to first disentangle the content factor and source sentiment factor, and then combine the content with target sentiment factor. However, the quality of the pseudo-parallel data is not quite satisfactory, which seriously affects the performance of the VAE model. Different from them, we dynamically update the pseudo-parallel data via on-the-fly back-translation (Lample et al., 2018b) during training (Eq. 12). There are some other tasks of NLP also show interest in controlling the fine-grained attribute of text generation. For example, Zhang et al. (2018a) and Ke et al. (2018) propose to control the specificity and diversity in dialogue generation. We borrow ideas from these works but the motivation and proposed models of our work are a far cry from them. 
The main differences are: (1) Since sentiment is dependent on local context while specificity is independent of local context, there is a series of design in our model to take the local context (or previous generated words) st into consideration (e.g., Eq. 1, Eq. 3). (2) Due to the lack of parallel data, we propose a cycle reinforcement learning algorithm to train the proposed model (Section 2.3). 6 Conclusion In this paper, we focus on solving the finegrained text sentiment transfer task, which is a natural extension of the binary sentiment transfer task but with more challenges. We propose a Seq2SentiSeq model to achieve the aim of controlling the fine-grained sentiment intensity of the generated sentence. In order to train the proposed model without any parallel data, we design a cycle reinforcement learning algorithm. We apply the proposed approach to the Yelp review dataset, obtaining state-of-the-art results in both automatic evaluation and human evaluation. 2021 Acknowledgments This paper is supported by NSFC project 61751201, 61772040 and 61876004. The contact authors are Baobao Chang and Zhifang Sui. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations, ICLR 2014. Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, AAAI-18, pages 663–670. Clayton J. Hutto and Eric Gilbert. 2014. VADER: A parsimonious rule-based model for sentiment analysis of social media text. In Proceedings of the Eighth International Conference on Weblogs and Social Media, ICWSM 2014. Pei Ke, Jian Guan, Minlie Huang, and Xiaoyan Zhu. 2018. Generating informative responses with controlled sentence function. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, pages 1746–1751. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of International Conference on Learning Representations, ICLR 2014. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In Proceedings of the International Conference on Learning Representations, ICLR 2018. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018, pages 5039–5049. Guillaume Lample, Sandeep Subramanian, Eric Smith, Ludovic Denoyer, Marc’Aurelio Ranzato, and YLan Boureau. 2019. Multiple-attribute text rewriting. In International Conference on Learning Representations, ICLR 2019. Jiwei Li, Will Monroe, Tianlin Shi, S´ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, pages 2157–2169. Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sentiment and style transfer. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, pages 1865–1874. Yi Liao, Lidong Bing, Piji Li, Shuming Shi, Wai Lam, and Tong Zhang. 2018. QuaSE: Sequence editing under quantifiable guidance. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018, pages 3855– 3864. Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, and Xu Sun. 2019. A dual reinforcement learning framework for unsupervised text style transfer. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI 2019. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, pages 1412– 1421. Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval-2018 task 1: Affect in tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT, pages 1–17. Amr Eldesoky Mousa and Bjorn W Schuller. 2017. Contextual bidirectional long short-term memory recurrent neural network language models: A generative approach to sentiment analysis. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, pages 1023– 1032. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, ACL 2002, pages 311– 318. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. In Proceedings of the International Conference on Learning Representations, ICLR 2017. Paulo E. Rauber, Alexandre X. Falc˜ao, and Alexandru C. Telea. 2016. Visualizing time-dependent data using dynamic t-SNE. In Eurographics Conference on Visualization, EuroVis 2016, pages 73–77. 2022 C´ıcero Nogueira dos Santos, Igor Melnyk, and Inkit Padhi. 2018. Fighting offensive language on social media with unsupervised text style transfer. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, pages 189–194. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in Neural Information Processing Systems, NIPS 2017, pages 6833– 6844. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. In Journal of Machine Learning Research, pages 1929–1958. Yulia Tsvetkov, Alan W. Black, Ruslan Salakhutdinov, and Shrimai Prabhumoye. 2018. Style transfer through back-translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, pages 866–876. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. In Machine Learning, pages 229– 256. Jingjing Xu, Xu Sun, Qi Zeng, Xiaodong Zhang, Xuancheng Ren, Houfeng Wang, and Wenjie Li. 2018. Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, pages 979–988. 
Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2018a. Learning to control the specificity in neural response generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, pages 1108–1117. You Zhang, Hang Yuan, Jin Wang, and Xuejie Zhang. 2017. YNU-HPCC at EmoInt-2017: Using a CNNLSTM model for sentiment intensity prediction. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, WASSA@EMNLP 2017, pages 200–204. Zhirui Zhang, Shuo Ren, Shujie Liu, Jianyong Wang, Peng Chen, Mu Li, Ming Zhou, and Enhong Chen. 2018b. Style transfer as unsupervised machine translation. In arXiv preprint arXiv:1808.07894.
2019
194
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2023–2035 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2023 Data-to-text Generation with Entity Modeling Ratish Puduppully and Li Dong and Mirella Lapata Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh EH8 9AB [email protected] [email protected] [email protected] Abstract Recent approaches to data-to-text generation have shown great promise thanks to the use of large-scale datasets and the application of neural network architectures which are trained end-to-end. These models rely on representation learning to select content appropriately, structure it coherently, and verbalize it grammatically, treating entities as nothing more than vocabulary tokens. In this work we propose an entity-centric neural architecture for data-to-text generation. Our model creates entity-specific representations which are dynamically updated. Text is generated conditioned on the data input and entity memory representations using hierarchical attention at each time step. We present experiments on the ROTOWIRE benchmark and a (five times larger) new dataset on the baseball domain which we create. Our results show that the proposed model outperforms competitive baselines in automatic and human evaluation.1 1 Introduction Data-to-text generation is the task of generating textual output from non-linguistic input (Reiter and Dale, 1997; Gatt and Krahmer, 2018). The input may take on several guises including tables of records, simulations of physical systems, spreadsheets, and so on. As an example, Figure 1 shows (in a table format) the scoring summary of a major league baseball (MLB) game, a play-by-play summary with details of the most important events in the game recorded chronologically (i.e., in which play), and a human-written summary. Modern approaches to data-to-text generation have shown great promise (Lebret et al., 2016; Mei et al., 2016; Perez-Beltrachini and Lapata, 2018; Puduppully et al., 2019; Wiseman et al., 1Our code and dataset can be found at https:// github.com/ratishsp/data2text-entity-py. 2017) thanks to the use of large-scale datasets and neural network models which are trained end-toend based on the very successful encoder-decoder architecture (Bahdanau et al., 2015). In contrast to traditional methods which typically implement pipeline-style architectures (Reiter and Dale, 2000) with modules devoted to individual generation components (e.g., content selection or lexical choice), neural models have no special-purpose mechanisms for ensuring how to best generate a text. They simply rely on representation learning to select content appropriately, structure it coherently, and verbalize it grammatically. In this paper we are interested in the generation of descriptive texts such as the game summary shown in Figure 1. Descriptive texts are often characterized as “entity coherent” which means that their coherence is based on the way entities (also known as domain objects or concepts) are introduced and discussed in the discourse (Karamanis et al., 2004). Without knowing anything about baseball or how game summaries are typically written, a glance at the text in Figure 1 reveals that it is about a few entities, namely players who had an important part in the game (e.g., Brad Keller, Hunter Dozier) and their respective teams (e.g., Orioles, Royals). 
The prominent role of entities in achieving discourse coherence has been long recognized within the linguistic and cognitive science literature (Kuno, 1972; Chafe, 1976; Halliday and Hasan, 1976; Karttunen, 1976; Clark and Haviland, 1977; Prince, 1981), with Centering Theory (Grosz et al., 1995) being most prominent at formalizing how entities are linguistically realized and distributed in texts. In this work we propose an entity-centric neural architecture for data-to-text generation. Instead of treating entities as ordinary tokens, we create entity-specific representations (i.e., for players and teams) which are dynamically updated as text is 2024 TEAM Inn1 Inn2 Inn3 Inn4 . . . R H E . . . Orioles 1 0 0 0 . . . 2 4 0 . . . Royals 1 0 0 3 . . . 9 14 1 . . . BATTER H/V AB R H RBI TEAM . . . C. Mullins H 4 2 2 1 Orioles . . . J. Villar H 4 0 0 0 Orioles . . . W. Merrifield V 2 3 2 1 Royals . . . R. O’Hearn V 5 1 3 4 Royals . . . . . . . . . . . . . . . . . . . . . . . . PITCHER H/V W L IP H R ER BB K . . . A. Cashner H 4 13 5.1 9 4 4 3 1 . . . B. Keller V 7 5 8.0 4 2 2 2 4 . . . . . . . . . . . . . . . . . . . . . . . . Inn1: innings, R: runs, H: hits, E: errors, AB: at-bats, RBI: runs-batted-in, H/V: home or visiting, W: wins, L: losses, IP: innings pitched, ER: earned runs, BB: walks, K: strike outs. KANSAS CITY, Mo. – Brad Keller kept up his recent pitching surge with another strong outing. Keller gave up a home run to the first batter of the game – Cedric Mullins – but quickly settled in to pitch eight strong innings in the Kansas City Royals’ 9–2 win over the Baltimore Orioles in a matchup of the teams with the worst records in the majors. Keller (7–5) gave up two runs and four hits with two walks and four strikeouts to improve to 3–0 with a 2.16 ERA in his last four starts. Ryan O’Hearn homered among his three hits and drove in four runs, Whit Merrifield scored three runs, and Hunter Dozier and Cam Gallagher also went deep to help the Royals win for the fifth time in six games on their current homestand. With the scored tied 1–1 in the fourth, Andrew Cashner (4–13) gave up a sacrifice fly to Merrifield after loading the bases on two walks and a single. Dozier led off the fifth inning with a 423-foot home run to left field to make it 3-1. The Orioles pulled within a run in the sixth when Mullins led off with a double just beyond the reach of Dozier at third, advanced to third on a fly ball and scored on Trey Mancini’s sacrifice fly to the wall in right. The Royals answered in the bottom of the inning as Gallagher hit his first home run of the season. . . BATTER PITCHER SCORER EVENT TEAM INN RUNS . . . C. Mullins B. Keller Home run Orioles 1 1 . . . H. Dozier A. Cashner W. Merrifield Grounded into DP Royals 1 1 . . . W. Merrifield A. Cashner B. Goodwin Sac fly Royals 4 2 . . . H. Dozier A. Cashner Home run Royals 4 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 1: MLB statistics tables and game summary. The tables summarize the performance of the two teams and of individual team members who played as batters and pitchers as well as the most important events (and their actors) in each play. Recurring entities in the summary are boldfaced and colorcoded, singletons are shown in black. being generated. Our model generates descriptive texts with a decoder augmented with a memory cell and a processor for each entity. 
At each time step in the decoder, the processor computes an updated representation of the entity as an interpolation between a candidate entity memory and its previous value. Processors are each a gated recurrent neural network and parameters among them are shared. The model generates text by hierarchically attending over memory cells and the records corresponding to them. We report experiments on the benchmark ROTOWIRE dataset (Wiseman et al., 2017) which contains statistics of NBA basketball games paired with human-written summaries. In addition, we create a new dataset for MLB (see Figure 1). Compared to ROTOWIRE, MLB summaries are longer (approximately by 50%) and the input records are richer and more structured (with the addition of play-by-play). Moreover, the MLB dataset is five times larger in terms of data size (i.e., pairs of tables and game summaries). We compare our entity model against a range of recently proposed neural architectures including an encoder-decoder model with conditional copy (Wiseman et al., 2017) and a variant thereof which generates texts while taking content plans into account (Puduppully et al., 2019). Our results show that modeling entities explicitly is beneficial and leads to output which is not only more coherent but also more concise and grammatical across both datasets. Our contributions in this work are three-fold: a novel entity-aware model for data-to-text generation which is linguistically motivated, yet resource lean (no preprocessing is required, e.g., to extract document plans); a new dataset for data-to-text generation which we hope will encourage further work in this area; a comprehensive evaluation and comparison study which highlights the merits and shortcomings of various recently proposed datato-text generation models on two datasets. 2 Related Work The sports domain has attracted considerable attention since the early days of generation systems (Robin, 1994; Tanaka-Ishii et al., 1998). Likewise, a variety of coherence theories have been developed over the years (e.g., Mann and Thomson 1988; Grosz et al. 1995) and their principles have found application in many symbolic text generation systems (e.g., Scott and de Souza 1990; Kibble and Power 2004). Modeling entities and their communicative actions has also been shown to improve system output in interactive storytelling 2025 (Cavazza et al., 2002; Cavazza and Charles, 2005) and dialogue generation (Walker et al., 2011). More recently, the benefits of modeling entities explicitly have been demonstrated in various tasks and neural network models. Ji et al. (2017) make use of dynamic entity representations for language modeling. And Clark et al. (2018) extend this work by adding entity context as input to the decoder. Both approaches condition on a single entity at a time, while we dynamically represent and condition on multiple entities in parallel. Kiddon et al. (2016) make use of fixed entity representations to improve the coverage and coherence of the output for recipe generation. Bosselut et al. (2018) model actions and their effects on entities for the same task. However, in contrast to our work, they keep entity representations fixed during generation. Henaff et al. (2017) make use of dynamic entity representations in machine reading. Entity representations are scored against a query vector to directly predict an output class or combined as a weighted sum followed by softmax over the vocabulary. 
We make use of a similar entity representation model, extend it with hierarchical attention and apply it to data-to text generation. The hierarchical attention mechanism was first introduced in Yang et al. (2016) as a way of learning document-level representations. We apply attention over records and subsequently over entity memories. Several models have been proposed in the last few years for data-to-text generation (Mei et al. 2016; Lebret et al. 2016; Wiseman et al. 2017, inter alia) based on the very successful encoderdecoder architecture (Bahdanau et al., 2015). Various attempts have also been made to improve these models, e.g., by adding content selection (PerezBeltrachini and Lapata, 2018) and content planning (Puduppully et al., 2019) mechanisms. However, we are not aware of any prior work in this area which explicitly handles entities and their generation in discourse context. 3 Background: Encoder-Decoder with Conditional Copy The input to our model is a table of records (see Figure 1). Records in turn have features, represented as {rj,l}L l=1 where L is the number of features in each record. Examples of features are values (rj,1; e.g., 8.0, Baltimore) or entities (rj,2; e.g., Orioles, C. Mullins). The model output y is a document containing words y = y1 · · · y|y| where |y| is the document length. Following previous work (Wiseman et al., 2017; Puduppully et al., 2019), we embed features into vectors, and then use a multilayer perceptron to obtain a vector representation rj for each record: rj = ReLU(Wr[rj,1; rj,2; ...; rj,L] + br) (1) where [; ] indicates vector concatenation, Wr ∈ Rn×nL, br ∈Rn are parameters, and ReLU is the rectifier activation function. Let {ej}|r| j=1 denote the output of the encoder. We use an LSTM decoder to compute the probability of each target word, conditioned on previously generated words, and on ej. In the case of ROTOWIRE, we follow previous work (Wiseman et al., 2017; Puduppully et al., 2019) and consider ej = rj. The first hidden state of the decoder is initialized by the average of the record vectors, avg({ej}|r| j=1). In the case of MLB, information encoded in play-by-play is sequential. Recall, that it documents the most important events in a game in chronological order. To account for this, we encode MLB records into {ej}|r| j=1 with a bidirectional LSTM. We impose an ordering on records in the box score (i.e., home team followed by away team) which is in turn followed by play-by-play where records are naturally ordered by time. The decoder is initialized with the concatenation of the hidden states of the final step of the encoder. At time step t, the input to the decoder LSTM is the embedding of the previously predicted word yt−1. Let dt denote the hidden state of the t-th LSTM unit. We compute attention scores αt,j over the encoder output ej and obtain dynamic context vector qt as the weighted sum of the hidden states of the input: αt,j ∝exp(d⊺ t Waej) qt = X j αt,jej datt t = tanh(Wc[dt; qt]) (2) where Wa ∈Rn×n, P j αt,j = 1, Wc ∈Rn×2n, and datt t is the attention vector. The probability of output text y conditioned on the input table r is modeled as: pgen(yt|y<t, r)=softmaxyt(Wydatt t + by) (3) 2026 Gate 𝑔2,1 𝑔2,2 𝑔2,𝑍 𝑠𝑡,2 𝑑𝑡 𝛼𝑡,2,1 𝛼𝑡,2,2 𝛼𝑡,2,𝑍 𝑠𝑡,1 𝑠𝑡,𝐾 𝑞𝑡 u𝑡,1 u𝑡,2 u𝑡,𝐾 𝑑𝑡 ψ𝑡1 ψ𝑡2 ψ𝑡𝐾 𝑓ϴ δ𝑡,1 ũ𝑡,1 Entity Memory u𝑡−1,1 𝑓ϴ δ𝑡,𝐾 ũ𝑡,𝐾 Gate u𝑡−1,𝐾 A B C … … … … … Figure 2: Diagram of entity memory network (block A) and hierarchical attention (blocks B and C). 
Module fθ represents update equations (6)–(8) where θ is the set of trainable parameters. The gate represents the entity memory update (Equation (9)). Block B covers Equations (10) and (11), and block C Equations (12) and (13). where Wy ∈R|Vy|×n, by ∈R|Vy| are parameters and |Vy| is the output vocabulary size. We further augment the decoder with a copy mechanism i.e., the ability to copy values from the input; copy implies yt = rj,1 for some t and j (e.g., Royals, Orioles, 9, 2 in the summary in Figure 1 are copied from r). We use the conditional copy method proposed in Gulcehre et al. (2016) where a binary variable is introduced as a switch gate to indicate whether yt is copied or not. 4 Entity Memory and Hierarchical Attention We extend the basic model from Section 3 with entity memory and hierarchical attention. Figure 2 provides a schematic overview of our architecture. 4.1 Entity Memory In order to render the model entity-aware, we compute xk as an average of record representation for each unique entity k (i.e., one of rj,2 values): xk = X j (1[rj,2 = k]rj)/ X j 1[rj,2 = k] (4) where 1[x] = 1 if x is true, and 0 otherwise. We initialize ut=−1,k, the memory representation of an entity at time t = −1, as: ut=−1,k = Wixk (5) where ut=−1,k ∈Rp and Wi ∈Rp×n. To capture the fact that discourse in descriptive texts may shift from one entity to the next, e.g., some entities may be salient in the beginning of the game summary (see Brad Kelly in the text in Figure 1), others only towards the end (see Dozier in Figure 1), and a few throughout (e.g., references to teams), we update entity representations at each time step during decoding. We use gate γt to indicate whether there should be an update in the entity representation: γt = σ(Wddt + bd) (6) where t >= 0, σ is the sigmoid function, Wd ∈ Rp×p, and bd ∈Rp. We also compute δt,k, the extent to which the entity representation should change, and ˜ut,k , the memory of the candidate entity: δt,k =γt ⊙σ(Wedt+be+Wfut−1,k+bf) (7) ˜ut,k =Wgdt (8) where ⊙denotes element-wise multiplication, We, ∈Rp×n, Wf ∈Rp×p, be, bf ∈Rp, and γt, δt,k ∈[0, 1]p (see block A in Figure 2). An element in gate γt will have value approaching 1 if an update in any ut−1,k is required. The value of an element in gate δt,k will approach 1 if the corresponding value of the element in ut−1,k changes. Equation (9) computes the update in entity memory as an interpolation over the gated representation of the previous value of the entity 2027 memory and the candidate entity memory: ut,k = (1 −δt,k) ⊙ut−1,k + δt,k ⊙˜ut,k (9) where ut,k represents entity k at time t. Previous work (Henaff et al., 2017; Ji et al., 2017; Clark et al., 2018) employs a normalization term over ut,k. We empirically found that normalization hurts performance and hence did not include it in our model. 4.2 Hierarchical Attention We hypothesize that our generator should first focus on entities (e.g., the main players and their teams) and then on the records corresponding to theses entities (e.g, player performance in the game). Our model implements this view of text generation via a hierarchical attention mechanism which we explain below. We also expect that focusing on entities first should improve the precision of the texts we generate as the entity distribution will constrain the probability distribution of records corresponding to each entity. 
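Before detailing the attention mechanism, the entity memory component just described (Section 4.1, Equations (4)–(9)) can be made concrete with the following minimal PyTorch-style sketch. It is purely illustrative and is not the authors' released implementation: the class and argument names (EntityMemory, rec_size, dec_size, mem_size) are our own, batching is omitted, and entity_ids is assumed to be a LongTensor mapping each record to its entity index.

import torch
import torch.nn as nn

class EntityMemory(nn.Module):
    # Illustrative sketch of the dynamic entity memory, Equations (4)-(9).
    def __init__(self, rec_size, dec_size, mem_size):
        super().__init__()
        self.W_i = nn.Linear(rec_size, mem_size, bias=False)   # Eq. (5)
        self.W_d = nn.Linear(dec_size, mem_size)                # Eq. (6), gate gamma_t
        self.W_e = nn.Linear(dec_size, mem_size)                # Eq. (7)
        self.W_f = nn.Linear(mem_size, mem_size)                # Eq. (7)
        self.W_g = nn.Linear(dec_size, mem_size, bias=False)    # Eq. (8), candidate memory

    def init_memory(self, records, entity_ids, num_entities):
        # records: (num_records, rec_size); entity_ids: LongTensor (num_records,)
        # Eq. (4): average the record vectors belonging to each unique entity.
        sums = records.new_zeros(num_entities, records.size(-1))
        counts = records.new_zeros(num_entities, 1)
        sums.index_add_(0, entity_ids, records)
        counts.index_add_(0, entity_ids, torch.ones_like(records[:, :1]))
        x = sums / counts.clamp(min=1.0)
        return self.W_i(x)                                      # u_{t=-1,k}, Eq. (5)

    def forward(self, d_t, u_prev):
        # d_t: (dec_size,) decoder state; u_prev: (num_entities, mem_size)
        gamma = torch.sigmoid(self.W_d(d_t))                             # Eq. (6)
        delta = gamma * torch.sigmoid(self.W_e(d_t) + self.W_f(u_prev))  # Eq. (7)
        u_tilde = self.W_g(d_t)                                          # Eq. (8)
        return (1.0 - delta) * u_prev + delta * u_tilde                  # Eq. (9)

Note that, as stated above, no normalization is applied to the updated memories u_t,k after the interpolation.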
To better understand the hierarchical attention mechanism, we can view the encoder output ej as a 2-dimensional array gk,z where k ∈[1, K] represents entities and z ∈[1, Z] represents records of entities and there is a one-to-one correspondence between positions j and k, z. We compute attention over gk,z, the encoder output, as: αt,k,z ∝exp(d⊺ t Wagk,z) (10) where Wa ∈Rn×n, P z αt,k,z = 1 (see block B in Figure 2). We compute the entity context as: st,k = X z αt,k,zgk,z (11) while attention over entity vectors ut,k is: Ψt,k ∝exp(d⊺ t Whut,k) (12) with Wh ∈Rn×p, P k Ψt,k = 1. And the encoder context qt (see block C in Figure 2) is computed as follows: qt = X k Ψt,kst,k (13) We feed qt into Equation (2) and compute pgen(yt|y<t, r), the probability of generating output text y conditioned on records r, as shown in Equation (3). ROTOWIRE MLB Vocab Size 11.3K 38.9K # Tokens 1.5M 14.3M # Instances 4.9K 26.3K Avg Length 337.1 542.05 # Record Types 39 53 Avg Records 628 565 Table 1: Vocabulary size, number of tokens, number of instances (i.e., record-summary pairs), average summary length, number of record types and average number of records in ROTOWIRE and MLB datasets. We experimented with feeding P k Ψt,kut,k as input context along the lines of Clark et al. (2018); however, results on the development dataset degraded performance, and we did not pursue this approach further. 5 Training and Inference Our training objective maximizes the log likelihood of output text given an input table of records: max X (r,y)∈D log p (y|r) where D is the training set consisting of pairs of record tables and output game summaries. During inference, we make use of beam search to approximately obtain the best output ˆy among candidate outputs y′: ˆy = arg max y′ p(y′|r) 6 Experimental Setup Data We performed experiments on two datasets. The first one is ROTOWIRE (Wiseman et al., 2017) which contains NBA basketball game statistics matched with human-written summaries. In addition, we created MLB, a new dataset which contains baseball statistics and corresponding human-authored summaries obtained from the ESPN website.2 Basic statistics on the two datasets are given in Table 1. As can be seen, MLB is approximately five times larger than ROTOWIRE, with richer vocabulary and longer summaries. For ROTOWIRE, we used the official training, development, and test splits of 3,398/727/728 instances. Analogously, for MLB we created a split of 22,821/1,739/1,744 instances. Game summaries in MLB were tokenized 2http://www.espn.com/mlb/recap?gameId={gameid} 2028 using nltk and hyphenated words were separated. Sentences containing quotes were removed as they included opinions and non-factual statements unrelated to the input tables. Sometimes MLB summaries contain a “Game notes” section with incidental information which was also removed. For MLB, the value of L in Equation (1) is 6, and for ROTOWIRE it is 4. The first four features are similar in both datasets and include value (rj,1; e.g., 8.0, Baltimore), entity (rj,2; e.g., Orioles, C. Mullins), record type (rj,3; e.g., RBI, R,H) and whether a player is on the home- or away- team (rj,4). MLB has two additional features which include the inning of play (rj,5; e.g., 9, 7, and -1 for records in the box score), and play index, a unique play identifier for a set of records in a play (rj,6; e.g., 0, 10, and -1 for records in the box score). Information Extraction For automatic evaluation, we make use of the Information Extraction (IE) approach proposed in Wiseman et al. (2017). 
The idea is to use a fairly accurate IE tool to extract relations from gold summaries and model summaries and then quantify the extent to which the extracted relations align or diverge (see Section 7 for the specific metrics we use). The IE system first identifies candidate entities (i.e., players, teams) and values (i.e., numbers), and given an “entity, value” pair it predicts the type of relation. For example, in ROTOWIRE, the relation for the pair “Kobe Bryant, 40” is PTS. Training data for the IE system is obtained automatically by matching entity-value pairs from summary sentences against record types. The IE system has an ensemble architecture which combines convolutional and bidirectional LSTM models. We reused the updated IE models from Puduppully et al. (2019) for ROTOWIRE3 and trained our own IE system for MLB. Box and line scores in MLB are identical in format to ROTOWIRE and pose no particular problems to the IE system. However, it is difficult to extract information from play-by-play and match it against the input tables. Consider the sentences Ryan O’Hearn homered or Keller gave up a home run from Figure 1 where we can identify entities (Ryan O’Hearn, Keller) and record types (home-run-batter, home-run-pitcher) but no specific values. We created a dummy value of -1 for such cases and the IE system was trained to predict the record type of entity value pairs such as (Ryan O’Hearn, -1) or (Keller, -1). Moreover, 3https://github.com/ratishsp/data2text-1/ the IE system does not capture attributes such as inning and team scores in play-by-play as it is difficult to deterministically match these against corresponding spans in text. The IE system thus would not be able to identify any records in the snippet tied 1–1 in the fourth. On MLB, the system achieved 83.4% precision and 66.7% recall (on held out data). We note that designing a highly accurate IE module for MLB is in itself a research challenge and outside the scope of this paper. In order to compare our model against Puduppully et al. (2019), we must have access to content plans which we extracted from ROTOWIRE and MLB by running the IE tool on gold summaries (training set). We expect the relatively low IE recall on MLB to disadvantage their model which relies on accurate content plans. Training Configuration Model hyperparameters were tuned on the development set. We used the Adagrad optimizer (Duchi et al., 2011) with an initial learning rate of 0.15, decayed by 0.97 for every epoch after the 4th epoch. We used truncated BPTT (Williams and Peng, 1990) of length 100 and made use of input feeding (Luong et al., 2015). We summarize the hyperparameters of the ROTOWIRE and MLB models in the Appendix. All models were implemented on a fork of OpenNMT-py (Klein et al., 2017). System Comparison We compared our entity model against the following systems: TEMPL is a template-based generator; we reused TEMPL from Wiseman et al. (2017) for ROTOWIRE and created a new system for MLB. The latter consists of an opening sentence about the two teams playing the game. It then describes statistics of pitchers (innings pitched, runs and hits given etc.) followed by a description of play-by-play (home run, single, double, triple etc.). ED+CC is the encoder-decoder model with conditional copy from Section 3 and the best performing system in Wiseman et al. (2017). NCP+CC is the best performing system in Puduppully et al. 
(2019); it generates content plans by making use of pointer networks (Vinyals et al., 2015) to point to the input ej; the resultant content plans are then encoded using a BiLSTM followed by an LSTM decoder with an attention and copy mechanism. 2029 RW RG CS CO BLEU # P% P% R% DLD% TEMPL 54.23 99.94 26.99 58.16 14.92 8.46 WS-2017 23.72 74.80 29.49 36.18 15.42 14.19 NCP+CC 34.28 87.47 34.18 51.22 18.58 16.50 ENT 30.11 92.69 38.64 48.51 20.17 16.12 MLB RG CS CO BLEU # P% P% R% DLD% TEMPL 59.93 97.96 22.82 68.46 10.64 3.81 ED+CC 18.69 92.19 62.01 50.12 25.44 9.69 NCP+CC 17.93 88.11 60.48 55.13 26.71 9.68 ENT 21.35 88.29 58.35 61.14 24.51 11.51 Table 2: Evaluation on ROTOWIRE (RW) and MLB test sets using relation generation (RG) count (#) and precision (P%), content selection (CS) precision (P%) and recall (R%), content ordering (CO) in normalized Damerau-Levenshtein distance (DLD%), and BLEU. 7 Results Automatic Evaluation We first discuss the results of automatic evaluation using the metrics defined in Wiseman et al. (2017). Let ˆy be the gold output and y the model output. Relation Generation measures how factual y is compared to input r. Specifically, it measures the precision and number of relations extracted from y which are also found in r. Content Selection measures the precision and recall of relations between ˆy and y. Content Ordering measures the DamerauLevenshtein distance between relations in y and relations in ˆy. In addition, we also report BLEU (Papineni et al., 2002) with the gold summaries as reference. Table 2 (top) summarizes our results on the ROTOWIRE test set (results on the development set are available in the Appendix). We report results for our dynamic entity memory model (ENT), the best system of Wiseman et al. (2017) (WS2017) which is an encoder-decoder model with conditional copy, and NCP+CC (Puduppully et al., 2019). We see that ENT achieves scores comparable to NCP+CC, but performs better on the metrics of RG precision, CS precision, and CO. ENT achieves substantially higher scores in CS precision compared to WS-2017 and NCP+CC, without any planning component; CS recall is worse for ENT compared to NCP+CC mainly because the latter model is trained to first create a content plan with good coverage of what to say. Table 2 (bottom) also presents our results on MLB (test set). Note that ED+CC is a reimplementation of Wiseman et al.’s (2017) encoderRW RG CS CO BLEU # P% P% R% DLD% ED+CC 22.68 79.40 29.96 34.11 16.00 14.00 +Hier 30.76 93.02 33.99 44.79 19.03 14.19 +Dyn 27.93 90.85 34.19 42.27 18.47 15.40 +Gate 31.84 91.97 36.65 48.18 19.68 15.97 MLB RG CS CO BLEU # P% P% R% DLD% ED+CC 18.69 92.65 62.29 51.36 25.93 9.55 +Hier 19.02 93.71 62.84 52.12 25.72 10.38 +Dyn 20.28 89.19 58.19 58.94 24.49 10.85 +Gate 21.32 88.16 57.36 61.50 24.87 11.13 Table 3: Ablation results on ROTOWIRE (RW) and MLB development set using relation generation (RG) count (#) and precision (P%), content selection (CS) precision (P%) and recall (R%), content ordering (CO) in normalized Damerau-Levenshtein distance (DLD%), and BLEU. decoder model (with conditional copy) on MLB. We see that ENT achieves highest BLEU amongst all models and highest CS recall and RG count amongst neural models. The RG precision of ENT is lower than ED+CC. Inspection of model output revealed that on MLB, ED+CC tends to focus on one or two players getting most of the facts about them right, whereas ENT sometimes gets the coreference wrong, and thus lower RG precision. 
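Before turning to the remaining comparisons and the ablation experiments, it may help to see the hierarchical attention of Section 4.2 (Equations (10)–(13)), which the "+Hier" ablation variant isolates, written out. The fragment below is again an illustrative PyTorch-style sketch with names of our own choosing, not the released code; it assumes the encoder outputs have been pre-grouped into a tensor g of shape (K, Z, n), i.e., Z records for each of K entities, alongside the K entity memories u_t.

import torch
import torch.nn as nn

class HierarchicalAttention(nn.Module):
    # Illustrative sketch of Equations (10)-(13): attend over the records of each
    # entity, then over the entity memories.
    def __init__(self, dec_size, rec_size, mem_size):
        super().__init__()
        self.W_a = nn.Linear(rec_size, dec_size, bias=False)  # bilinear score, Eq. (10)
        self.W_h = nn.Linear(mem_size, dec_size, bias=False)  # bilinear score, Eq. (12)

    def forward(self, d_t, g, u_t):
        # d_t: (dec_size,); g: (K, Z, rec_size); u_t: (K, mem_size)
        alpha = torch.softmax(torch.matmul(self.W_a(g), d_t), dim=-1)  # Eq. (10), per entity
        s_t = (alpha.unsqueeze(-1) * g).sum(dim=1)                     # Eq. (11): entity contexts
        psi = torch.softmax(torch.matmul(self.W_h(u_t), d_t), dim=-1)  # Eq. (12), over entities
        q_t = (psi.unsqueeze(-1) * s_t).sum(dim=0)                     # Eq. (13): encoder context
        return q_t

The resulting q_t is fed into Equation (2) exactly as described above.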
The TEMPL system scores highest on RG precision and count, and CS recall on both datasets. This is because TEMPL can make use of domain knowledge which is not available to the neural models. TEMPL performs poorly on MLB in terms of BLEU, in fact it is considerably worse compared to the similar template system on ROTOWIRE (see Table 2). This suggests that the task of creating MLB game summaries is hard, even for a template system which does not perform any sophisticated generation. Ablation Experiments We further examined how individual model components contribute to the quality of the generated summaries. To assess the impact of hierarchical attention (Section 4.2) over ED+CC, we report the performance of a stripped-down variant of our model without dynamic entity memory. Specifically, the entity memory was kept static and set to ut=−1,k (see Equation (5)). In this model, attention over entity vectors is: Ψt,k ∝exp(d⊺ t Whut=−1,k) (14) We next examined the contribution of dynamic memory, by adding it to this model without the 2030 gate γt (i.e., we set γt to one) and Equation (7) then becomes: δt,k = σ(Wedt + be + Wfut−1,k + bf) (15) Finally, we obtain our final ENT model, by incorporating the update gate mechanism. The results of the ablation study are shown in Table 3. We compare ED+CC against variants “+Hier”, “+Dyn” and “+Gate” corresponding to successively adding hierarchical attention, dynamic memory, and the update gate mechanism. On both datasets, hierarchical attention, improves relation generation, content selection, and BLEU. Dynamic memory and the update gate brings further improvements to content selection and BLEU. Because it conditions on entities, ENT is able to produce text displaying nominal coreference which is absent from the outputs of ED+CC and WS-2017. We present an example in Table 4 (and in the Appendix) where entities Dwight Howard and James Harden are introduced and then later referred to as Howard and Harden. We also see that while generating the last sentence about the next game, ENT is able to switch the focus of attention from one team (Rockets) to the other (Nuggets), while NCP+CC verbalises Nuggets twice. Human-Based Evaluation Following earlier work (Wiseman et al., 2017; Puduppully et al., 2019), we also evaluated our model by asking humans to rate its output in terms of relation generation, coherence, grammaticality, and conciseness. Our studies were conducted on the Amazon Mechanical Turk platform. For ROTOWIRE, we compared ENT against NCP+CC, Gold, and TEMPL. We did not compare against WS-2017 or ED+CC, since prior work (Puduppully et al., 2019) has shown that NCP+CC is superior to these models in terms of automatic and human-based evaluation. For MLB, we compared ENT against NCP+CC, ED+CC, Gold, and TEMPL. In the first study, participants were presented with sentences randomly selected from the game summary (test set) together with corresponding box and line score tables and were asked to count supporting and contradicting facts in these sentences. We evaluated 30 summaries and 4 sentences per summary for each of ROTOWIRE and MLB. We elicited 5 responses per summary. As shown in Table 5, on ROTOWIRE ENT yields a comparable number of supporting and contradicting facts to NCP+CC (the difference is The Houston Rockets (18–5) defeated the Denver Nuggets (10–13) 108–96 on Tuesday at the Toyota Center in Houston. The Rockets had a strong first half where they out– scored . . . The Rockets were led by Donatas Motiejunas, who scored a game–high of 25 points . 
. . James Harden also played a factor in the win, as he went 7–for . . . Coming off the bench, Donatas Motiejunas had a big game and finished with 25 points . . . The only other player to reach double figures in points was Arron Afflalo, who came off the bench for 12 points . . . Coming off the bench, Arron Afflalo chipped in with 12 points . . . The Nuggets’ next game will be on the road against the Boston Celtics on Friday, while the Nuggets will travel to Boston to play the Celtics on Wednesday. The Houston Rockets (18–5) defeated the Denver Nuggets (10–13) 108–96 on Monday at the Toyota Center in Houston. The Rockets were the superior shooters in this game, going . . . The Rockets were led by the duo of Dwight Howard and James Harden. Howard shot 9–for–11 from the field and . . . Harden on the other hand recorded 24 points (7–20 FG, 2–5 3Pt, 8–9 FT), 10 rebounds and 10 assists, The only other Nugget to reach double figures in points was Arron Afflalo, who finished with 12 points (4– 17 FG,. . . The Rockets’ next game will be on the road against the New Orleans Pelicans on Wednesday, while the Nuggets will travel to Los Angeles to play the Clippers on Friday. Table 4: Examples of model output for NCP+CC (top) and ENT (bottom) on ROTOWIRE. Recurring entities in the summaries are boldfaced and colorcoded, singletons are shown in black. not statistically significant). TEMPL has the highest number of supporting facts, even relative to gold summaries, and very few contradicting facts. This is expected as TEMPL output is mostly factual, it essentially parrots statistics from the tables. On MLB, ENT yields a number of supporting facts comparable to Gold and NCP+CC, but significantly lower than ED+CC and TEMPL. Contradicting facts are significantly lower for ENT compared to NCP+CC, but comparable to ED+CC and higher than TEMPL and Gold. We also evaluated the quality of the generated summaries. Following earlier work (Puduppully et al., 2019), we presented participants with two summaries at a time and asked them to choose which one is better in terms of Grammaticality (is the summary written in well-formed English?), Coherence (do the sentences in summary follow a coherent discourse?), and Conciseness (does the summary tend to repeat the same content?) We divided the four competing systems (Gold, TEMPL, NCP+CC, and ENT) into six pairs of summaries for ROTOWIRE and the five competing systems (Gold, TEMPL, ED+CC, NCP+CC, and ENT) into ten pairs for MLB. We used Best-Worst scaling (Louviere and Woodworth, 1991; Louviere 2031 ROTOWIRE #Supp #Contra Gram Coher Concis Gold 2.98* 0.28* 4.07* 3.33 -10.74* TEMPL 6.98* 0.21* -3.70* -3.33* 17.78* NCP+CC 4.90 0.90 -3.33* -3.70* -3.70 ENT 4.77 0.80 2.96 3.70 -3.33 MLB #Supp #Contra Gram Coher Concis Gold 2.81 0.15* 1.24* 3.48* -9.33* TEMPL 3.98* 0.04* -10.67* -7.30* 8.43* ED+CC 3.24* 0.40 0.22* -0.90* -2.47* NCP+CC 2.86 0.88* 0.90* -1.35* -1.80* ENT 2.86 0.52 8.31 6.07 5.39 Table 5: Average number of supporting and contradicting facts in game summaries and best-worst scaling evaluation (higher is better) on ROTOWIRE and MLB datasets. Systems significantly different from ENT are marked with an asterisk * (using a one-way ANOVA with posthoc Tukey HSD tests; p ≤0.05). . et al., 2015), a more reliable alternative to rating scales. The score of a system is computed as the number of times it was rated best minus the number of times is rated worst (Orme, 2009). Scores range from −100 (absolutely worst) to 100 (absolutely best). 
We elicited judgments for 30 test summaries for ROTOWIRE and MLB; each summary was rated by 3 participants. As shown in Table 5, on ROTOWIRE Gold receives highest scores in terms of Grammaticality, which is not unexpected. ENT comes close, achieving better scores than NCP+CC and TEMPL, even though our model only enhances the coherence of the output. Participants find ENT on par with Gold on Coherence and better than NCP+CC and TEMPL whose output is stilted and exhibits no variability. In terms of Conciseness, TEMPL is rated best, which is expected since it does not contain any duplication, the presented facts are mutually exclusive; ENT is comparable to NCP+CC and better than Gold. As far as MLB is concerned, ENT achieves highest scores on Grammaticality and Coherence. It is rated high on Conciseness also, second only to TEMPL whose scores are lowest on Grammaticality and Coherence. Perhaps surprisingly, Gold is rated lower than ENT on all three metrics; we hypothesize that participants find Gold’s output too verbose compared to the other systems. Recall that MLB gold summaries are relative long, the average length is 542 tokens compared to ROTOWIRE whose summaries are almost half as long (see Table 1). The average length of output summaries for ENT is 327 tokens. Taken together, our results show that ENT performs better than comparison systems on both ROTOWIRE and MLB. Compared to NCP+CC, it is conceptually simpler and more portable, as it does not rely on content plans which have to be extracted via an IE system which must be reconfigured for new datasets and domains. 8 Conclusions In this work we presented a neural model for datato-text generation which creates entity-specific representations (that are dynamically updated) and generates text using hierarchical attention over the input table and entity memory. Extensive automatic and human evaluation on two benchmarks, ROTOWIRE and the newly created MLB, show that our model outperforms competitive baselines and manages to generate plausible output which humans find coherent, concise, and factually correct. However, we have only scratched the surface; future improvements involve integrating content planning with entity modeling, placing more emphasis on play-by-play, and exploiting dependencies across input tables. Acknowledgments We would like to thank Adam Lopez for helpful discussions. We acknowledge the financial support of the European Research Council (Lapata; award number 681760). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, California. Antoine Bosselut, Omer Levy, Ari Holtzman, Corin Ennis, Dieter Fox, and Yejin Choi. 2018. Simulating action dynamics with neural process networks. In Proceedings of the 6th International Conference on Learning Representations, ICLR 2018, Vancouver, Canada. Marc Cavazza and Fred Charles. 2005. Dialogue generation in character-based interactive storytelling. In Proceedings of the 1st Artificial Intelligence and Interactive Digital Entertainment Conference, pages 21–26, Marina del Rey, California. Marc Cavazza, Fred Charles, and Steven J Mead. 2002. Character-based interactive storytelling. IEEE Intelligent Systems, 17(4):17–24. 2032 Wallace L. Chafe. 1976. Givenness, contrastiveness, definiteness, subjects, topics, a nd point of view. In Charles N. Li, editor, Subject and topic, pages 25– 55. 
Academic Press, New York. Elizabeth Clark, Yangfeng Ji, and Noah A. Smith. 2018. Neural text generation in stories using entity representations as context. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2250–2260, New Orleans, Louisiana. Herbert H. Clark and Susan E. Haviland. 1977. Comprehension and the given- new conract. In Roy O. Freedle, editor, Discourse production and comprehension, pages 1–39. Ablex, Norwood, NewJersey. John C. Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159. Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. J. Artif. Intell. Res., 61:65–170. Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203–225. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 140– 149, Berlin, Germany. M. A. K. Halliday and Ruqaiya Hasan. 1976. Cohesion in English. Longman, London. Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2017. Tracking the world state with recurrent entity networks. In Proceedings of the 5th International Conference on Learning Representations, ICLR 2017, Toulon, France. OpenReview.net. Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A. Smith. 2017. Dynamic entity representations in neural language models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1831– 1840, Copenhagen, Denmark. Nikiforos Karamanis, Massimo Poesio, Chris Mellish, and Jon Oberlander. 2004. Evaluating centeringbased metrics of coherence. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04). Lauri Karttunen. 1976. Discourse referents. In James D. McCawley, editor, Syntax and Semantics: Notes from the Linguistic Underground, volume 7, pages 363–86. Academic Press, New York. Rodger Kibble and Richard Power. 2004. Optimizing referential coherence in text generation. Computational Linguistics, 30(4):401–416. Chlo´e Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 329–339, Austin, Texas. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67–72, Vancouver, Canada. Susumu Kuno. 1972. Functional sentence perspective. Linguistic Inquiry, 3:269–320. R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203–1213, Austin, Texas. Jordan J Louviere, Terry N Flynn, and Anthony Alfred John Marley. 2015. Best-worst scaling: Theory, methods and applications. Cambridge University Press. Jordan J Louviere and George G Woodworth. 1991. 
Best-worst scaling: A model for the largest difference judgments. University of Alberta: Working Paper. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. William C. Mann and Sandra A. Thomson. 1988. Rhetorical structure theory. Text, 8(3):243–281. Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. What to talk about and how? selective generation using lstms with coarse-to-fine alignment. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 720–730, San Diego, California. Bryan Orme. 2009. Maxdiff analysis: Simple counting, individual-level logit, and hb. Sawtooth Software. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. 2033 Laura Perez-Beltrachini and Mirella Lapata. 2018. Bootstrapping generators from noisy data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1516–1527, New Orleans, Louisiana. Ellen Prince. 1981. Toward a taxonomy of givennew information. In Peter Cole, editor, Radical Pragmatics, pages 223–255. Academic Press, New York/London. Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-text generation with content selection and planning. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence, Honolulu, Hawaii. Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Language Engineering, 3(1):57–87. Ehud Reiter and Robert Dale. 2000. Building natural language generation systems. Cambridge University Press, New York, NY. Jacques Robin. 1994. Revision-based generation of Natural Language Summaries providing historical Background. Ph.D. thesis, Ph. D. thesis, Columbia University. Donia Scott and Clarisse Sieckenius de Souza. 1990. Getting the message across in RST-based text generation. In Robert Dale, Chris Mellish, and Michael Zock, editors, Current Research in Natural Language Generation, pages 47–73. Academic Press, New York. Kumiko Tanaka-Ishii, Koiti Hasida, and Itsuki Noda. 1998. Reactive content selection in the generation of real-time soccer commentary. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2, pages 1282–1288, Montreal, Quebec, Canada. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2692–2700. Curran Associates, Inc. Marilyn A Walker, Ricky Grant, Jennifer Sawyer, Grace I Lin, Noah Wardrip-Fruin, and Michael Buell. 2011. Perceived or not perceived: Film character models for expressive NLG. In International Conference on Interactive Digital Storytelling, pages 109–121. Springer. Ronald J. Williams and Jing Peng. 1990. An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural Computation, 2(4):490–501. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. 
Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253–2263, Copenhagen, Denmark. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489, San Diego, California. A Appendix Hyperparameters Table 6 contains the hyperparameters used for our ENT model on the ROTOWIRE and MLB datasets. Results on the Development Set Table 7 (top) shows results on the ROTOWIRE development set for our dynamic entity memory model (ENT), the best system of Wiseman et al. (2017) (WS-2017) which is an encoder-decoder model with conditional copy, the template generator (TEMPL), our implementation of encoder-decoder model with conditional copy (ED+CC), and NCP+CC (Puduppully et al., 2019). We see that ENT achieves scores comparable to NCP+CC, but performs better on the metrics of RG precision, CS precision, and CO. Table 7 (bottom) also presents our results on MLB. ENT achieves highest BLEU amongst all models and highest CS recall and RG count amongst neural models. Qualitative Examples Tables 8 and 9 contain examples of model output for ROTOWIRE and MLB, respectively. Because it conditions on entities, ENT is able to produce text displaying nominal coreference compared to other models. 2034 ROTOWIRE MLB Word Embeddings 600 300 Hidden state size 600 600 Entity memory size 300 300 LSTM Layers 2 1 Input Feeding Yes Yes Dropout 0.3 0.3 Optimizer Adagrad Adagrad Initial learning rate 0.15 0.15 Learning rate decay 0.97 0.97 Epochs 25 25 BPTT size 100 100 Batch size 5 12 Inference beam size 5 5 Table 6: Hyperparameters for ROTOWIRE and MLB. RW RG CS CO BLEU # P% P% R% DLD% TEMPL 54.29 99.92 26.61 59.16 14.42 8.51 WS-2017 23.95 75.10 28.11 35.86 15.33 14.57 ED+CC 22.68 79.40 29.96 34.11 16.00 14.00 NCP+CC 33.88 87.51 33.52 51.21 18.57 16.19 ENT 31.84 91.97 36.65 48.18 19.68 15.97 MLB RG CS CO BLEU # P% P% R% DLD% TEMPL 59.93 97.96 22.82 68.46 10.64 3.81 ED+CC 18.69 92.65 62.29 51.36 25.93 9.55 NCP+CC 17.70 88.01 59.76 55.23 26.87 9.43 ENT 21.32 88.16 57.36 61.50 24.87 11.13 Table 7: Results on ROTOWIRE (RW) and MLB development sets using relation generation (RG) count (#) and precision (P%), content selection (CS) precision (P%) and recall (R%), content ordering (CO) in normalized Damerau-Levenshtein distance (DLD%), and BLEU. System Summary Template The Atlanta Hawks (44–30) defeated the Detroit Pistons (39–35) 112–95. Paul Millsap scored 23 points (8–13 FG, 3–4 3PT, 4–5 FT) to go with 9 rebounds. Tobias Harris scored 21 points (10–20 FG, 1–3 3PT, 0–0 FT) to go with 10 rebounds. Andre Drummond scored 19 points (7–11 FG, 0–0 3PT, 5–9 FT) to go with 17 rebounds. Kent Bazemore scored 17 points (7–9 FG, 3–5 3PT, 0–0 FT) to go with 4 rebounds. Aron Baynes scored 15 points (5–6 FG, 0–0 3PT, 5–6 FT) to go with 7 rebounds. Al Horford scored 13 points (6–15 FG, 1–5 3PT, 0–0 FT) to go with 5 rebounds. The Atlanta Hawks’ next game will be at home against the Dallas Mavericks, while the Detroit Pistons will travel to play the Bulls. NCP+CC The Atlanta Hawks (44–30) defeated the Detroit Pistons (39–35) 112–95 on Wednesday at the Palace of Auburn Hills. The Hawks came into this game riding a three–game losing streak and it was clear they did n’t have it all on the floor. 
Paul Millsap led the way for the Hawks with 23 points (8–13 FG, 3–4 3Pt, 4–5 FT), along with nine rebounds, five assists and four steals, in 33 minutes. Al Horford chipped in 13 points (6–15 FG, 1–5 3Pt), and Kent Bazemore chipped in 17 points on 7–9 shooting, including 3–5 from deep. Kyle Korver chipped in 12 points (5–10 FG, 1–5 3Pt, 1–1 FT) and three rebounds in 29 minutes, while Tobias Harris chipped in 21 points (10–20 FG, 1–3 3Pt), 10 rebounds and five assists. Tobias Drummond had a double–double of his own with 19 points (7–11 FG, 5–9 FT) and 17 rebounds, along with two blocked shots, in 33 minutes. Andre Drummond had a double–double of his own, with 19 points (7–11 FG, 5–9 FT) and 17 rebounds, along with two blocked shots. The only other player to score in double digits for Detroit was Andre Drummond, who finished with 19 points (7–11 FG, 5–9 FT) and 17 rebounds, along with two blocked shots. The Pistons’ next game will be on the road against the Cleveland Cavaliers on Friday, while the Pistons will travel to Minnesota to play the Timberwolves on Wednesday. ENT The Atlanta Hawks (44–30) defeated the Detroit Pistons (39–35) 112–95 on Monday at the Palace of Auburn Hills. The Hawks got off to a quick start in this one, out–scoring the Pistons 27–15 in the first quarter alone. The Hawks were the superior shooters in this game, going 45 percent from the field and 38 percent from the three–point line, while the Pistons went 39 percent from the floor and just 24 percent from beyond the arc. The Hawks were led by the duo of Paul Millsap and Andre Drummond. Millsap finished with 23 points (8–13 FG, 3–4 3Pt, 4–5 FT), nine rebounds and four blocked shots, while Drummond had 19 points (7–11 FG, 5–9 FT), 17 rebounds and two blocked shots. It was his second double–double in a row, as he’s combined for 45 points and 19 rebounds over his last two games. He’s now averaging 15 points and 7 rebounds on the season. Jeff Teague was the other starter to reach double figures in points, as he finished with 12 points (3–13 FG, 2–3 3Pt, 4–4 FT) and 12 assists. The Hawks’ next game will be at home against the Cleveland Cavaliers on Friday, while the Pistons will travel to Los Angeles to play the Clippers on Friday. Table 8: Example output from the template-based system, NCP+CC (Puduppully et al., 2019) and our ENT model for ROTOWIRE. Recurring entities in the summaries are boldfaced and colorcoded, singletons are shown in black. 2035 System Summary Template The Tampa Bay Rays defeated the Oakland Athletics 13–4. Jason Isringhausen (0–0) allowed 0 runs, 1 hits and 0 walks in 1 innings. Jeff Niemann (4–3) allowed 4 runs, 8 hits and 0 walks in 8 innings. Sean Gallagher (1–2) allowed 9 runs, 3 hits and 5 walks in 2 1/3 innings. Kevin Cameron (0–0) allowed 0 runs, 0 hits and 1 walks in 2 innings. Gio Gonzalez (0–0) allowed 4 runs, 6 hits and 3 walks in 3 2/3 innings. Jason Bartlett hit 2 RBI single in the first. Orlando Cabrera hit 1 RBI homer in the third Athletics. Akinori Iwamura hit 2 RBI single in the third. Jason Bartlett hit 1 RBI triple in the fifth. Akinori Iwamura hit 1 RBI single in the fifth. Jason Bartlett hit 2 RBI single in the sixth. Jack Cust hit 2 RBI homer in the eighth Athletics. ED+CC ST. PETERSBURG, Fla.— Jason Bartlett had three hits and drove in five runs, and the Tampa Bay Rays beat the Oakland Athletics 13–4 on Tuesday night. Bartlett hit a two–run single during a six–run first inning off Gio Gonzalez (1–2), who gave up nine runs, three hits and three walks in 2 1/3 innings. 
Orlando Cabrera hit a solo homer in the third for the A’s, who have lost four of five. Orlando Cabrera hit a solo homer for the A’s, who have lost four of five. Orlando Cabrera hit a solo homer for the A’s, who have lost four of five. Jack Cust hit a two–run homer in the eighth for the A’s, who have lost four of five. Orlando Cabrera hit a solo homer for the A’s, who have lost four of five. Orlando Cabrera hit a solo homer for the A’s, who have lost three of four. Orlando Cabrera hit a solo homer in the third for the A’s, who have lost four of five. Orlando Cabrera hit a solo homer in the third for the A’s, who have lost three straight and five of six. Gallagher (1–2) gave up nine runs, four earned, and six hits in 2 1/3 innings. He walked three and struck out six. Gallagher (1–2) gave up nine runs, four earned, and six hits in 2 1/3 innings. He walked three and struck out six. Cust hit a two–run homer in the eighth. NCP+CC ST. PETERSBURG, Fla.— The Tampa Bay Rays got a big boost from Jason Bartlett’s grand slam. Bartlett drove in five runs, Jason Bartlett had five RBIs, Jason Bartlett had five RBIs and the Rays beat the Oakland Athletics 13–4 on Friday night. Jeff Niemann (4–3) allowed four runs, three earned, and eight hits in eight innings. Sean Gallagher (1–2) gave up a career–high nine runs in 2 1/3 innings. Jack Cust hit a two–run homer in the seventh and Jack Cust added a two–run homer for the A’s, who have lost four of five. Tampa Bay has lost four of five. The Rays have lost four of five. Tampa Bay has lost four of five. The Rays have lost four of five. Oakland has lost four of five. The Rays have lost four of five. Tampa Bay has lost four of five. The Rays have lost four of five. Tampa Bay has lost five of six. The Rays have lost four of five. Tampa Bay has lost five of six. The Rays have lost four of five. Tampa Bay has lost five of six overall. Oakland has lost five of six. The Rays have lost four of five. Tampa Bay has lost four of five. Oakland has lost four of five. Oakland has lost five of six. The Rays have lost four of five. Tampa Bay has lost four of five. Oakland has lost four of five. Oakland has lost five of six. Oakland has lost five of six. ENT ST. PETERSBURG, Fla.— Jason Bartlett and the Tampa Bay Rays finally found a way to beat the Oakland Athletics. Bartlett had a career–high five RBIs, Jeff Niemann pitched eight strong innings and the Rays beat the Oakland Athletics 13–4 on Tuesday night. Bartlett had a two–run single in the first and added a two–run single in the third to help the Rays take a 6–1 lead. Bartlett had a two–run single in the third and a two–run single in the sixth. Jack Cust hit a two–run homer in the eighth for the A’s, who have won five of six. The A’s scored six runs off Sean Gallagher (1–2), who gave up a career–high nine runs— seven earned— and three hits in 2 1/3 innings. Niemann (4–3) gave up four runs, three earned, and eight hits in eight innings. The right–hander struck out three and did not walk a batter for the second time this season. The right–hander is 4–0 in six career starts against the A’s. Orlando Cabrera hit a solo homer in the third for the A’s, who have lost four of five. Oakland starter Gio Gonzalez gave up four runs and six hits in 3 2/3 innings. The right–hander struck out six and walked three. The right–hander was coming off a 1–0 loss to the A’s in his previous start, when he gave up six runs in 4 1/3 innings of a 10–0 loss to the A’s. 
The A’s took a 1–0 lead in the first when Ben Zobrist drew a bases–loaded walk and Bartlett had a two–run single. Table 9: Example output from the template-based system, ED+CC, NCP+CC (Puduppully et al., 2019) and our ENT model for MLB. Recurring entities are boldfaced and colorcoded, singletons are shown in black.
2019
195
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2036–2046 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2036 Ensuring Readability and Data-fidelity using Head-modifier Templates in Deep Type Description Generation Jiangjie Chen†, Ao Wang†, Haiyun Jiang†, Suo Feng†, Chenguang Li†, Yanghua Xiao†‡∗ †Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, China ‡Shanghai Institute of Intelligent Electronics & Systems, Shanghai, China {jiangjiechen14, awang15, jianghy16, fengs17, cgli17, shawyh}@fudan.edu.cn Abstract A type description is a succinct noun compound which helps human and machines to quickly grasp the informative and distinctive information of an entity. Entities in most knowledge graphs (KGs) still lack such descriptions, thus calling for automatic methods to supplement such information. However, existing generative methods either overlook the grammatical structure or make factual mistakes in generated texts. To solve these problems, we propose a head-modifier template-based method to ensure the readability and data fidelity of generated type descriptions. We also propose a new dataset and two automatic metrics for this task. Experiments show that our method improves substantially compared with baselines and achieves stateof-the-art performance on both datasets. 1 Introduction Large-scale open domain KGs such as DBpedia (Auer et al., 2007), Wikidata (Vrandeˇci´c and Kr¨otzsch, 2014) and CN-DBpedia (Xu et al., 2017) are increasingly drawing the attention from both academia and industries, and have been successfully used in many applications that require background knowledge to understand texts. In KGs, a type description (Bhowmik and de Melo, 2018) is a kind of description which reflects the rich information of an entity with little cognitive efforts. A type description must be informative, distinctive and succinct to help human quickly grasp the essence of an unfamiliar entity. Compared to other kinds of data in a KG, types in entity-typing task (Shimaoka et al., 2016; Ren et al., 2016) are too general and not informative enough (e.g., when asked about “what is rue Cazotte?”, street in Paris, France is obviously more informative and distinctive than a type location.), and the fixed ∗Corresponding author: Yanghua Xiao. 3,' 3URSHUW\ 9DOXH 3 LQVWDQFH RI VWUHHW 3 QDPHG DIWHU -DFTXHV &D]RWWH 3 FRXQWU\ )UDQFH 3 ORFDWHG LQ WKH DGPLQLVWUDWLYH WHUULWRULDO HQWLW\ 3DULV 3 VKDUHV ERUGHU ZLWK UXH &KDUOHV1RGLHU $hed$ in $mod$ , $mod$ street in Paris , France Infobox of rue Cazotte Stage 2 Stage 1 Generate template Generate type description Figure 1: An example of the two-stage generation of our head-modifier template-based method. $hed$ and $mod$ are the placeholder for head and modifier components in the template. type set is too inflexible to expand; while infobox and abstract are too long with too much information, which increases cognitive burden. Type descriptions are useful for a wide range of applications, including question answering (e.g. what is rue Cazotte?), named entity disambiguation (e.g. Apple (fruit of the apple tree) vs Apple (American technology company)), taxonomy enrichment, etc. However, many entities in current open-domain KGs still lack such descriptions. For example, in DBpedia and CN-DBpedia respectively, there are only about 21% and 1.8% entities that are provided with such descriptions1. 
Essentially, a type description is a noun compound, which follows a grammatical rule called head-modifier rule (Hippisley et al., 2005; Wang et al., 2014). It always contains a head component (also head words or heads), and usually contains a modifier component (also modifier words or modifiers). The head component representing the type 1According to DBpedia 2016-10 dump and CN-DBpedia 2015-07 dump. 2037 information of the entity makes it distinctive from entities of other types; the modifier component limits the scope of that type, making it more finegrained and informative. For example, in street in Paris, France, the head word street indicates that it is a street, and the modifier words Paris and France indicate the street is located in Paris, France. Due to the low recall and limited patterns of extractive methods (Hearst, 1992), generative methods are more suitable to acquire more type descriptions. Generally, there are several challenges in generating a type description from an infobox: 1) it must be grammatically correct to be readable, given that a trivial mistake could lead to a syntax error (e.g. street with Paris, France); 2) it must guarantee the data fidelity towards input infobox, e.g., the system shouldn’t generate street in Germany for a French street; 3) its heads must be the correct types for the entity, and a mistake in heads is more severe than in modifiers, e.g., in this case, river in France is much worse than street in Germany. We argue that the head-modifier rule is crucial to ensure readability and data-fidelity in type description generation. However, existing methods pay little attention to it. Bhowmik and de Melo (2018) first propose a dynamic memorybased generative network to generate type descriptions from infobox in a neural manner. They utilize a memory component to help the model better remember the training data. However, it tends to lose the grammatical structure of the output, as it cannot distinguish heads from modifiers in the generation process. Also, it cannot handle the outof-vocabulary (OOV) problem, and many modifier words may be rare and OOV. Other data-totext (Wiseman et al., 2017; Sha et al., 2018) and text-to-text (Gu et al., 2016; Gulcehre et al., 2016; See et al., 2017) models equipped with copy mechanism alleviate OOV problem, without considering the difference between heads and modifiers, resulting in grammatical or factual mistakes. To solve the problems above, we propose a head-modifier template-based method. To the best of our knowledge, we are the first to integrate head-modifier rule into neural generative models. Our method is based on the observation that a head-modifier template exists in many type descriptions. For example, by replacing heads and modifiers with placeholders $hed$ and $mod$, the template for street in Paris, France is $hed$ in $mod$, $mod$, which is also the template for a series of similar type descriptions such as library in California, America, lake in Siberia, Russia, etc. Note that, the $hed$ and $mod$ can appear multiple times, and punctuation like a comma is also an important component of a template. Identifying the head and modifier components is helpful for providing structural and contextual cues in content selection and surface realization in generation, which correspond to data fidelity and readability respectively. As shown in Fig.1, the model can easily select the corresponding properties and values and organize them by the guidance of the template. 
The head-modifier template is universal as the head-modifier rule exists in any noun compound in English, even in Chinese (Hippisley et al., 2005). Therefore, the templates are applicable for open domain KGs, with no need to design new templates for entities from other KGs. There are no existing head-modifier templates to train from, therefore we use the dependency parsing technique (Manning et al., 2014) to acquire templates in training data. Then, as presented in Fig.1, our method consists of two stages: in Stage 1, we use an encoder-decoder framework with an attention mechanism to generate a template; in Stage 2, we use a new encoderdecoder framework to generate a type description, and reuse previously encoded infobox and apply a copy mechanism to preserve information from source to target. Meanwhile, we apply another attention mechanism upon generated templates to control the output’s structure. We then apply a context gate mechanism to dynamically select contexts during decoding. In brief, our contributions2 in this paper include, 1) we propose a new head-modifier templatebased method to improve the readability and data fidelity of generating type descriptions, which is also the first attempt of integrating head-modifier rule into neural generative models; 2) we apply copy and context gate mechanism to enhance the model’s ability of choosing contents with the guidance of templates; 3) we propose a new dataset with two new automatic metrics for this task, and experiments show that our method achieves stateof-the-art performance on both datasets. 2https://github.com/Michael0134/HedModTmplGen 2038 Knowledge Graph EntityID: Q3447345 rue Cazotte ℎ" # ℎ# # … ℎ$ # ℎ%& # street P31 0 jacques P138 0 cazotte P138 1 … … … france P17 0 (a) Infobox Encoder ℎ$ " ℎ' " ℎ( " ℎ" " ℎ# " (c) Template Encoder )" # )# # )' # )$ # )( # )* # , $hed$ in $mod$ $mod$ EOS Infobox Attention (b)Template Decoder Value word Property ID Position )" " )# " )' " )$ " )( " )* " , street in paris france EOS (d)Description Decoder Infobox Attention Template Attention , $hed$ in $mod$ $mod$ Stage 1 Stage 2 Stage 1: infobox -> template Stage 2: infobox + template -> type description Figure 2: Overall architecture of our method. In Stage 1, the model generates a template from infobox of entity rue Cazotte (the entity can be found at Wikidata by EntityID), then in Stage 2 the model completes this template by reusing the infobox and generates a type description for this entity. 2 Method In this section, we demonstrate our method in detail. As shown in Fig.2, given an entity from Wikidata3 and its corresponding infobox, we split the generation process into two stages. In Stage 1, the model takes as input an infobox and generates a head-modifier template. In Stage 2, the model takes as input the previously encoded infobox and the output template, and produces a type description. Note that our model is trained in an end-toend manner. 2.1 Stage 1: Template Generation In this stage, we use an encoder-decoder framework to generate a head-modifier template of the type description. 2.1.1 Infobox Encoder Our model takes as input an infobox of an entity, which is a series of (property, value) pairs denoted as I. We then reconstruct them into a sequence of words to apply Seq2Seq learning. In order to embed structural information from the infobox into word embedding xi, following Lebret et al. 
(2016), we represent x_i = [v_{x_i}; f_{x_i}; p_{x_i}] for the i-th word x_i in the values, with the word embedding v_{x_i} for x_i, a corresponding property embedding f_{x_i}, and the positional information embedding p_{x_i}, where [·; ·] stands for vector concatenation.

3www.wikidata.org

Figure 3: An example of reconstructing a Wikidata infobox (left) into a sequence of words with property and position information (right). PN denotes a property ID in Wikidata.

For example, as shown in Fig. 3, we reconstruct (named after, Jacques Cazotte) into Jacques with (named after, 0) and Cazotte with (named after, 1), as Jacques is the first token in the value and Cazotte is the second. Next, we concatenate the embeddings of Jacques, named after, and 0 as the reconstructed embedding for Jacques. Note that we keep three separate embedding matrices for properties, value words, and positions, so even though the property country is the same string as the value country, they are not the same token. Then, we employ a standard GRU (Chung et al., 2014) to read the input X = {x_i}_{i=1}^{L_x} and produce a sequence of hidden states H^x = {h^1_i}_{i=1}^{L_x}, which are shared by both stages, where L_x is the length of the input sequence.

2.1.2 Template Annotation

Figure 4: An example of extracting a head-modifier template from a type description by dependency parsing with the Stanford CoreNLP toolkit.

In this task, the type descriptions are diversified yet follow the head-modifier rule. Stage 1 of our model learns the templates from the training data, but there are no existing templates for training the template generation. Therefore, we acquire head-modifier templates using the dependency parser provided by Stanford CoreNLP (Manning et al., 2014). Specifically, a type description is formed by head words (or heads), modifier words (or modifiers), and conjunctions. In our work, we refer to the words that are types as heads of a type description, so there can be multiple heads. For example, singer and producer in American singer, producer are both head words. In a dependency parse, the root of a noun compound is always a head word of the type description, so we acquire heads by finding the root and its parallel terms. The remaining words, except conjunctions and stopwords, are considered modifiers. We then obtain the template by substituting heads with $hed$ and modifiers with $mod$, as shown in Fig. 4.
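The annotation procedure just described is easy to sketch in code. The snippet below extracts a head-modifier template from a type description whose dependency parse is given as plain (word, head index, relation) tuples; the hand-written parse and the toy stopword list stand in for the output of a real parser such as Stanford CoreNLP and are assumptions made only to keep the sketch self-contained.

```python
# Sketch of head-modifier template annotation (Section 2.1.2).
# Input: tokens of a type description with a dependency parse given as
# (word, head_index, relation) tuples; head_index = -1 marks the root.

STOPWORDS = {"in", "of", "the", "a", "an", "and", ","}

def annotate_template(parsed_tokens):
    # Heads: the root of the noun compound plus any terms conjoined with it.
    head_ids = {i for i, (_, head, _) in enumerate(parsed_tokens) if head == -1}
    head_ids |= {i for i, (_, head, rel) in enumerate(parsed_tokens)
                 if rel == "conj" and head in head_ids}
    template = []
    for i, (word, _, _) in enumerate(parsed_tokens):
        if i in head_ids:
            template.append("$hed$")       # head word -> type placeholder
        elif word.lower() in STOPWORDS:
            template.append(word)          # stopwords and punctuation kept verbatim
        else:
            template.append("$mod$")       # remaining words are modifiers
    return " ".join(template)

# "street in paris , france": "street" is the root; "paris"/"france" modify it.
parse = [("street", -1, "root"), ("in", 2, "case"), ("paris", 0, "nmod"),
         (",", 4, "punct"), ("france", 2, "appos")]
print(annotate_template(parse))  # -> "$hed$ in $mod$ , $mod$"
```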
2.1.3 Template Decoder

In template generation, the template decoder D_1 takes as input the previously encoded hidden states H^x and produces a series of hidden states {s^1_1, s^1_2, ..., s^1_{L_x}} and a template sequence T = {t_1, t_2, ..., t_{L_t}}, where L_t is the length of the generated template. As template generation is a relatively lighter and easier task, we apply a canonical attention decoder as D_1, with a GRU as the RNN unit.

Formally, at each time step j, the decoder produces a context vector c^1_j:

c^1_j = \sum_{i=1}^{L_x} \alpha_{ij} h^1_i, \qquad \alpha_{ij} = \frac{\eta(s^1_{j-1}, h^1_i)}{\sum_{k=1}^{L_x} \eta(s^1_{j-1}, h^1_k)}    (1)

where \eta(s^1_j, h^1_i) is a relevance score between the encoder hidden state h^1_i and the decoder hidden state s^1_j. Among the many ways to compute this score, we apply the general product of Luong et al. (2015) to measure the similarity between the two:

\eta(h^1_i, s^1_{j-1}) = h^{1\top}_i W_1 s^1_{j-1}    (2)

where W_1 is a learnable parameter. The decoder state is then updated by s^1_j = GRU([t_{j-1}; c^1_j], s^1_{j-1}). Finally, the results are fed into a softmax layer, from which the system produces t_j.

2.2 Stage 2: Description Generation

After Stage 1 is finished, the generated template sequence T and the infobox encoder hidden states H^x are fed into Stage 2 to produce the final type description.

2.2.1 Template Encoder

As the template is an ordered sequence, we use a bidirectional GRU (Schuster and Paliwal, 1997) to encode the template sequence into another series of hidden states H^t = {h^2_i}_{i=1}^{L_t}. We then feed both H^t and H^x to the description decoder for further refinement.

2.2.2 Description Decoder

The description decoder D_2 is a GRU-based decoder with a dual attention mechanism: a canonical attention mechanism and a copy mechanism that attend over the template representation H^t and the infobox representation H^x, respectively. This is because we need the model to preserve information from the source while maintaining the head-modifier structure learned from the templates.

In detail, let s^2_j be D_2's hidden state at time step j. The first, canonical attention mechanism is similar to the one described in Section 2.1.3, except that the decoder hidden states are replaced and the related learnable parameters are changed. By applying it, we obtain a context vector c^t_j over H^t and a context vector c^x_j over H^x.

Then, we use the context gates proposed by Tu et al. (2017) to dynamically balance the contexts from the infobox, the template, and the target, and to decide the ratio at which the three contexts contribute to the generation of target words. Formally, we calculate the context gates g^*_j by

g^x_j = \sigma(W^x_g e(y_{j-1}) + U^x_g s_{j-1} + C^x_g c^x_j)
g^t_j = \sigma(W^t_g e(y_{j-1}) + U^t_g s_{j-1} + C^t_g c^t_j)    (3)

where W^*_g, U^*_g, C^*_g are all learnable parameters, \sigma is a sigmoid layer, and e(y) embeds the word y. After that, we apply a linear interpolation to integrate these contexts and update the decoder state:

c^2_j = (1 - g^x_j - g^t_j)(W e(y_{j-1}) + U s^2_{j-1}) + g^x_j C_1 c^x_j + g^t_j C_2 c^t_j
s^2_j = GRU([e(y_{j-1}); c^2_j], s^2_{j-1})    (4)

where W, U, C_1, C_2 are all learnable parameters.

To conduct a kind of slot-filling procedure and enhance the model's ability to copy words directly from the infobox, we further apply the conditional copy mechanism (Gulcehre et al., 2016) upon H^x. As the produced words may come from the vocabulary or directly from the infobox, we assume a new decoding vocabulary V' = V \cup {x_i}_{i=1}^{L_x}, where V is the original vocabulary of size N, and unk is the replacement for out-of-vocabulary words. Following Wiseman et al. (2017), the probabilistic function of y_j is as follows:

p(y_j, z_j \mid y_{<j}, I, T) =
\begin{cases}
p_{\mathrm{copy}}(y_j \mid y_{<j}, I, T)\, p(z_j \mid y_{<j}, I), & z_j = 0 \\
p_{\mathrm{gen}}(y_j \mid y_{<j}, I, T)\, p(z_j \mid y_{<j}, I), & z_j = 1
\end{cases}    (5)

where z_j is a binary variable deciding whether y_j is copied from I or generated, and p(z_j | ·) is the switch between copy and generate modes, implemented as a multi-layer perceptron (MLP).
pcopy(yj|·) and pgen(yj|·) are the probabilities of copy mode and generate mode respectively, which are calculated by applying softmax on copy scores φcopy and generation scores φgen. These scores are defined as follows: φgen(yj = v) = Wg[s2 j; c2 j], v ∈V ∪{unk} φcopy(yj = xi) = tanh(hx i Wc)s2 j, xi ∈V′ −V (6) where Wc, Wg are both learnable parameters. Therefore, a word is considered as a copied word if it appears in the value portion of the source infobox. 2.3 Learning Our model is able to be optimized in an end-toend manner and is trained to minimize the negative log-likelihood of the annotated templates T given infobox I and the ground truth type descriptions given T and I. Formally, L1 = − Lt X i=1 log p(ti|t<i, I) L2 = − Ly X i=1 log p(yi|y<i, I, T ) L = L1 + L2 (7) where L1 is the loss in Stage 1, L2 is the loss in Stage 2, and Ly is the length of the target. 3 Experiments In this section, we conduct several experiments to demonstrate the effectiveness of our method. 3.1 Datasets We conduct experiments on two English datasets sampled from Wikidata, which are referred to as Wiki10K and Wiki200K respectively. Wiki10K is the original dataset proposed by Bhowmik and de Melo (2018), which is sampled from Wikidata and consists of 10K entities sampled from the official RDF exports of Wikidata dated 201608-01. However, this dataset is not only too small to reveal the subtlety of models, but it’s also relatively imbalanced with too many human entities based on the property instance of. Therefore, we propose a new and larger dataset Wiki200K, which consists of 200K entities more evenly sampled from Wikidata dated 2018-10-01. Note that, in both Wiki10K and Wiki200K, we filter all the properties whose data type are not wikibase-item, wikibase-property or time according to Wikidata database reports4. KGs such as Wikidata are typically composed of semantic triples. A semantic triple is formed by a subject, a predicate, and an object, corresponding to entity, property and value in Wikidata. 4https://www.wikidata.org/wiki/Wikidata:Database reports/ List of properties/all 2041 We make sure that every entity from both datasets has at least 5 property-value pairs (or statement in Wikidata parlance) and an English type description. The basic statistics of the two datasets are demonstrated in Table 1. Then, we randomly divide two datasets into train, validation and test sets by the ratio of 8:1:1. Datasets Wiki10K Wiki200K # entities 10,000 200,000 # properties 480 900 vocabulary size 28,785 130,686 # avg statement 8.90 7.96 Copy(%) 88.24 71.30 Table 1: Statistics for both datasets, where “#” denotes the number counted, and avg is short for average. “Copy(%)” denotes the copy ratio in the golden type descriptions excluding stopwords, which is similar to the metric ModCopy defined in Section 3.2. 3.2 Evaluation Metrics Following the common practice, we evaluate different aspects of the generation quality with automatic metrics broadly applied in many natural language generation tasks, including BLEU (B-1, B-2) (Papineni et al., 2002), ROUGE (RG-L) (Lin, 2004), METEOR (Banerjee and Lavie, 2005) and CIDEr (Vedantam et al., 2015). BLEU measures the n-gram overlap between results and ground truth, giving a broad point of view regarding fluency, while ROUGE emphasizes on the precision and recall between both. METEOR matches human perception better and CIDEr captures human consensus. Nonetheless, these metrics depend highly on the comparison with ground truth, instead of the system’s input. 
In this task, the output may still be correct judging by input infobox even if it’s different from the ground truth. Therefore, we introduce two simple automatic metrics designed for this task to give a better perspective of the data fidelity of generated texts from the following aspects: • Modifier Copy Ratio (ModCopy). We evaluate the data fidelity regarding preserving source facts by computing the ratio of modifier words (that is, excluding stopwords and head words) in the type descriptions that are copied from the source. In detail, we roughly consider a word in a type description as a copied word if it shares a L-character (4 in our experiments) prefix with any word but stopwords in the values of source infobox. For example, modifier Japanese could be a copied modifier word from the fact (country, Japan). To clarify, the copy ratio of a type description can be calculated by #copied words #all words−#stopwords. The Modifier Copy Ratio measures to what extent the informative words are preserved in the modifiers of the model’s output. • Head Accuracy (HedAcc). For a type description, it is crucial to make sure that the head word is the right type of entity. Therefore, in order to give an approximate estimate of the data fidelity regarding head words, we also evaluate the head word’s accuracy in the output. Note that aside from ground truth, infobox is also a reliable source to provide candidate types. Specifically, in Wikidata, the values in instance of (P31) and subclass of (P279) are usually suitable types for an entity, though not every entity has these properties and these types could be too coarse-grained like human. Therefore, after dependency parsing, we count the head words in the output with heads from corresponding ground truth and values of corresponding infobox properties, then gives an accuracy of the heads of output. The Head Accuracy measures model’s ability of predicting the right type of the entity. 3.3 Baselines and Experimental Setup We compared our method with several competitive generative models. All models except DGN are implemented with the help of OpenNMT-py (Klein et al., 2017). Note that we use the same infobox reconstructing method described in Section 2.1.1 to apply Seq2Seq learning for all models except DGN since it has its own encoding method. The baselines include: • AttnSeq2Seq (Luong et al., 2015). AttnS2S is a standard RNN-based Seq2Seq model with an attention mechanism. • Pointer-Generator (See et al., 2017). PtrGen is originally designed for text summarization, providing a strong baseline with a copy mechanism. Note that, in order to make 2042 Wiki10K Model B-1 B-2 RG-L METEOR CIDEr ModCopy HedAcc AttnS2S 53.96 47.56 55.25 29.95 2.753 69.45 52.82 Ptr-Gen 64.24 57.11 65.37 36.42 3.536 83.88 67.92 Transformer 61.63 54.93 63.14 35.01 3.400 75.37 61.13 DGN 63.24 57.52 64.50 35.92 3.372 77.53 64.65 Our work 65.09 58.72 66.92 37.55 3.717 86.04 70.68 Wiki200K Model B-1 B-2 RG-L METEOR CIDEr ModCopy HedAcc AttnS2S 66.15 61.61 70.55 37.65 4.105 49.59 79.76 Ptr-Gen 70.13 66.21 75.21 41.38 4.664 58.27 85.38 Transformer 69.78 66.07 75.60 41.52 4.654 53.85 85.55 DGN 62.60 57.86 69.30 34.84 3.815 48.30 81.31 Our work 73.69 69.59 76.77 43.54 4.847 58.14 85.81 Table 2: Evaluation results of different models on both datasets. a fairer comparison with our model, we additionally equip Ptr-Gen with context gate mechanism so that it becomes a no-template version of our method. • Transformer (Vaswani et al., 2017). 
Transformer recently outperforms traditional RNN architecture in many NLP tasks, which makes it also a competitive baseline, even if it’s not specifically designed for this task. • DGN (Bhowmik and de Melo, 2018). DGN uses a dynamic memory based network with a positional encoder and an RNN decoder. It achieved state-of-the-art performance in this task. In experiments, we decapitalize all words and keep vocabularies at the size of 10,000 and 50,000 for Wiki10K and Wiki200K respectively, and use unk to represent other out-of-vocabulary words. For the sake of fairness, the hidden size of RNN (GRU in our experiments) and Transformer in all models are set to 256. The word embedding size is set to 256, and the property and position embedding sizes are both set to 128. During training, we use Adam (Kingma and Ba, 2014) as the optimization algorithm. 3.4 Results and Analysis The experimental results of metrics described in Section 3.2 are listed in Table 2. In general, our method achieves state-of-the-art performance over proposed baselines. As shown in the table, our method improves substantially compared with standard encoderdecoder models (AttnS2S and Transformer) and the previous state-of-the-art method (DGN). Interestingly, DGN is out-performed by Ptr-Gen in Wiki10K and by most of the models in the larger dataset Wiki200K. We also notice that Transformer performs much better on Wiki200K, which is most likely because of its learning ability through massive training data. These results further prove the necessity of proposing our new dataset. Among baselines, Ptr-Gen achieves relatively better results due to copy mechanism and context gate mechanism. These mechanisms give the model the ability to cope with the OOV problem and to directly preserve information from the source, which is important in this task. Note that, as described in Section 3.3, we enhance the Pointer-Generator to become a no-template version of our model, therefore the effect of the headmodifier template can be measured by comparing the results of these two methods. And the results demonstrate that our head-modifier template plays an important role in generating type descriptions. In terms of the two proposed metrics, we find these metrics roughly positively correlated with traditional metrics, which in a way justifies our metrics. These metrics provide interesting points of view on measuring generation quality. The performance on ModCopy indicates that methods (Ptr-Gen, ours) with copy mechanism improves data fidelity by copying facts from the source, and the template helps the model know where and how 2043 to copy. The performance on HedAcc demonstrates that our method is relatively better at predicting types for an entity, which in a way suggests the templates help the generated text maintain the head-modifier structure so that the head word is successfully parsed by the dependency parsing technique. Although, we notice that in Wiki200K, models perform relatively worse on ModCopy and better on HedAcc than in Wiki10K. This is most likely because the types of entities are finite, and more training data leads to more accuracy in predicting types. Due to the size of the dataset and the limit of vocabulary size, the factual information is harder to preserve in the output. This again proves the necessity of the new dataset. 3.4.1 Manual Evaluation In this task, the readability of the generated type description is mostly related to its grammatical correctness, which benefits from the headmodifier templates. 
Therefore, in order to measure the influence the templates make in terms of readability as well as how ModCopy (M.C.) and HedAcc (H.A.) correlate with manual judgment, we manually evaluate the generation from two aspects: Grammar Accuracy (G.A.) and Overall Accuracy (O.A.). In detail, Grammar Accuracy is the grammatical correctness judging by the grammar of the generated text alone; Overall Accuracy is the grammatical and de facto correctness of the generated type description given an infobox and the ground truth. Note that Overall Accuracy is always lower than or equal to Grammar Accuracy. In our experiment, we randomly select 200 pieces of data from the test set of Wiki200K, and provide the results of each method to the volunteers (who are all undergraduates) for manual evaluation. We make sure each result is evaluated by two volunteers so as to eliminate the influence of subjective factors to some extent. Model G.A. O.A. M.C. H.A. AttnS2S 92.25 50.50 51.53 80.27 Ptr-Gen 90.00 65.00 62.50 88.01 Transformer 95.25 58.00 55.70 89.67 DGN 89.50 56.00 47.29 81.37 Our work 96.50 66.25 61.32 90.29 Table 3: Results of manual evaluation as well as two proposed metrics. The results, as shown in Table 3, prove again the effectiveness of our method. Our method outperforms other baselines in term of Grammar Accuracy, which demonstrates that the model benefits from the head-modifier templates in term of readability by knowing “how to say it”. In particular, the templates improves the Grammar Accuracy substantially compared with Ptr-Gen. Results on the Overall Accuracy indicate that our method ensures readability as well as data-fidelity, which indicates that the model benefits from the templates by knowing “what to say”. As for the proposed metrics ModCopy and HedAcc, they are, in line with intuition, relatively positively correlated with human judgment in general. Also, notice that the statistics on both metrics are consistent with Table 2. 3.4.2 Effect of Templates We aim to investigate whether the model is able to correct itself if the template generated in Stage 1 deviates from the correct one. We select cases from Wiki10K test set to conduct experiments. During inference, we deliberately replace the template in Stage 2 to see if the generated text still complies with the given template or if the model will be able to generate the right type description. Entity ID: Q859415 Gold: commune in paris, france Template 1: $hed$ in $mod$, $mod$ Output 1: commune in paris, france Template 2: $mod$ $hed$ Output 2: commune in france Template 3: $hed$ $mod$ Output 3: commune Entity ID: Q18758590 Gold: italian architect and teacher Template 1: $mod$ $hed$ and $hed$ Output 1: italian architect and architect Template 2: $mod$ $hed$ Output 2: italian architect Template 3: $hed$ $mod$ and $mod$ Output 3: italy and teacher Figure 5: Examples of replacing templates. Template 1’s are the inital generated templates, while the remaining ones are produced by the authors. We use bold to denote the heads and use italic red to denote mistaken words. The experimental results, as presented in Fig. 5, show our method’s resilience against mistaken templates. 
In the first case: 1) the replaced template Template 2 is obviously inconsistent with the golden template Template 1 (though 2044 it’s also a possible template for other type descriptions), yet the model still manages to generate a type description though paris is lost; 2) Template 3 doesn’t have the conjunction in, which causes confusion but the model still successfully predicts the right head. In the second case, the model originally generates repetitive heads: 1) in Template 2, we delete the second $hed$ in Template 1, and as a result, the model successfully generates a correct though incomplete output; 2) while Template 3 is completely wrong judging by the head-modifier rule, and as a result Output 3 is lost in readability. Nevertheless, due to the fact that the number of type descriptions is infinite yet the number of head-modifier templates is rather finite, the model can hardly generate a template that’s completely wrong, therefore this scenario rarely happens in real life. Still, the model tries to maintain a similar structure and successfully keeps data fidelity by predicting teacher, and preserving italy. 4 Related Work There has been extensive work on mining entitytype pairs (i.e. isA relations) automatically. Hearst (1992) uses a pattern-based method to extract isA pairs directly from free text with Hearst Patterns (e.g., NP1 is a NP2; NP0 such as {NP1, NP2, ..., (and|or)}NPn) from which taxonomies can be induced (Poon and Domingos, 2010; Velardi et al., 2013; Bansal et al., 2014). But these methods are limited in patterns, which often results in low recall and precision. The most related line of work regarding predicting types for entities is entity-typing (Collins and Singer, 1999; Jiang and Zhai, 2006; Ratinov and Roth, 2009), which aims to assign types such as people, location from a fixed set to entity mentions in a document, and most of them model it a classification task. However, the types, even for those aiming at fine-grained entity-typing (Shimaoka et al., 2016; Ren et al., 2016; Anand et al., 2017) are too coarse-grained to be informative about the entity. Also, the type set is too small and inflexible to meet the need for an everexpanding KG. In this task, the structured infobox is a source more suitable than textural data compared with text summarization task (Gu et al., 2016; See et al., 2017; Cao et al., 2018), because not every entity in a KG possesses a paragraph of description. For example, in CN-DBpedia (Xu et al., 2017), which is one of the biggest Chinese KG, only a quarter of the entities have textual descriptions, yet almost every entity has an infobox. Natural language generation (NLG) from structured data is a classic problem, in which many efforts have been made. A common approach is to use hand-crafted templates (Kukich, 1983; McKeown, 1992), but the acquisition of these templates in a specific domain is too costly. Some also focus on automatically creating templates by clustering sentences and then use hand-crafted rules to induce templates (Angeli et al., 2010; Konstas and Lapata, 2013). Recently with the rise of neural networks, many methods generate text in an endto-end manner (Liu et al., 2017; Wiseman et al., 2017; Bhowmik and de Melo, 2018). However, they pay little attention to the grammatical structure of the output which may be ignored in generating long sentences, but it is crucial in generating short noun compounds like type descriptions. 
5 Conclusion and Future Work In this paper, we propose a head-modifier template-based type description generation method, powered by a copy mechanism and context gating mechanism. We also propose a larger dataset and two metrics designed for this task. Experimental results demonstrate that our method achieves state-of-the-art performance over baselines on both datasets while ensuring data fidelity and readability in generated type descriptions. Further experiments regarding the effect of templates show that our model is not only controllable through templates, but resilient against wrong templates and able to correct itself. Aside from such syntax templates, in the future, we aim to explore how semantic templates contribute to type description generation. 6 Acknowledgements We thank the anonymous reviewers for valuable comments. This work was supported by National Key R&D Program of China (No.2017YFC0803700), Shanghai Municipal Science and Technology Major Project (Grant No 16JC1420400). 2045 References Ashish Anand, Amit Awekar, et al. 2017. Fine-grained entity type classification by jointly learning representations and label embeddings. arXiv preprint arXiv:1702.06709. Gabor Angeli, Percy Liang, and Dan Klein. 2010. A simple domain-independent probabilistic approach to generation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 502–512. Association for Computational Linguistics. S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In The semantic web, pages 722–735. Springer. Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72. Mohit Bansal, David Burkett, Gerard De Melo, and Dan Klein. 2014. Structured learning for taxonomy induction with belief propagation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1041–1051. Rajarshi Bhowmik and Gerard de Melo. 2018. Generating fine-grained open vocabulary entity type descriptions. In Proceedings of ACL 2018. Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018. Retrieve, rerank and rewrite: Soft template based neural summarization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 152–161. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. Michael Collins and Yoram Singer. 1999. Unsupervised models for named entity classification. In 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. arXiv preprint arXiv:1603.08148. Marti A Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th conference on Computational linguisticsVolume 2, pages 539–545. Association for Computational Linguistics. 
Andrew Hippisley, David Cheng, and Khurshid Ahmad. 2005. The head-modifier principle and multilingual term extraction. Natural Language Engineering, 11(2):129–157. Jing Jiang and ChengXiang Zhai. 2006. Exploiting domain structure for named entity recognition. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 74–81. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL. Ioannis Konstas and Mirella Lapata. 2013. A global model for concept-to-text generation. Journal of Artificial Intelligence Research, 48:305–346. Karen Kukich. 1983. Design of a knowledge-based report generator. In Proceedings of the 21st annual meeting on Association for Computational Linguistics, pages 145–150. Association for Computational Linguistics. R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. arXiv preprint arXiv:1603.07771. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2017. Table-to-text generation by structure-aware seq2seq learning. arXiv preprint arXiv:1711.09724. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pages 55–60. Kathleen McKeown. 1992. Text generation. Cambridge University Press. 2046 Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Hoifung Poon and Pedro Domingos. 2010. Unsupervised ontology induction from text. In Proceedings of the 48th annual meeting of the Association for Computational Linguistics, pages 296–305. Association for Computational Linguistics. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, pages 147– 155. Association for Computational Linguistics. Xiang Ren, Wenqi He, Meng Qu, Lifu Huang, Heng Ji, and Jiawei Han. 2016. Afet: Automatic fine-grained entity typing by hierarchical partial-label embedding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1369–1378. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681. Abigail See, Peter J Liu, and Cristopher D Manning. 2017. Get To The Point: Summarization with Pointer-Generator Networks . pages 1–20. Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, and Zhifang Sui. 2018. Orderplanning neural text generation from structured data. 
In Thirty-Second AAAI Conference on Artificial Intelligence. Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2016. Neural architectures for fine-grained entity type classification. arXiv preprint arXiv:1606.01341. Zhaopeng Tu, Yang Liu, Zhengdong Lu, Xiaohua Liu, and Hang Li. 2017. Context gates for neural machine translation. Transactions of the Association for Computational Linguistics, 5:87–99. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566–4575. Paola Velardi, Stefano Faralli, and Roberto Navigli. 2013. Ontolearn reloaded: A graph-based algorithm for taxonomy induction. Computational Linguistics, 39(3):665–707. Denny Vrandeˇci´c and Markus Kr¨otzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78–85. Zhongyuan Wang, Haixun Wang, and Zhirui Hu. 2014. Head, modifier, and constraint detection in short texts. In 2014 IEEE 30th International Conference on Data Engineering, pages 280–291. IEEE. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in Data-to-Document Generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253–2263, Stroudsburg, PA, USA. Association for Computational Linguistics. Bo Xu, Yong Xu, Jiaqing Liang, Chenhao Xie, Bin Liang, Wanyun Cui, and Yanghua Xiao. 2017. Cndbpedia: A never-ending chinese knowledge extraction system. In International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, pages 428–438. Springer.
Key Fact as Pivot: A Two-Stage Model for Low Resource Table-to-Text Generation Shuming Ma,1,3 Pengcheng Yang,1,2 Tianyu Liu,1 Peng Li,3 Jie Zhou,3 Xu Sun1,2 1MOE Key Lab of Computational Linguistics, School of EECS, Peking University 2Deep Learning Lab, Beijing Institute of Big Data Research, Peking University 3Pattern Recognition Center, WeChat AI, Tencent Inc, China {shumingma,yang pc,tianyu0421,xusun}@pku.edu.cn {patrickpli,withtomzhou}@tencent.com Abstract Table-to-text generation aims to translate the structured data into the unstructured text. Most existing methods adopt the encoder-decoder framework to learn the transformation, which requires large-scale training samples. However, the lack of large parallel data is a major practical problem for many domains. In this work, we consider the scenario of low resource table-to-text generation, where only limited parallel data is available. We propose a novel model to separate the generation into two stages: key fact prediction and surface realization. It first predicts the key facts from the tables, and then generates the text with the key facts. The training of key fact prediction needs much fewer annotated data, while surface realization can be trained with pseudo parallel corpus. We evaluate our model on a biography generation dataset. Our model can achieve 27.34 BLEU score with only 1, 000 parallel data, while the baseline model only obtain the performance of 9.71 BLEU score.1 1 Introduction Table-to-text generation is to generate a description from the structured table. It helps readers to summarize the key points in the table, and tell in the natural language. Figure 1 shows an example of table-to-text generation. The table provides some structured information about a person named “Denise Margaret Scott”, and the corresponding text describes the person with the key information in the table. Table-to-text generation can be applied in many scenarios, including weather report generation (Liang et al., 2009), NBA news writing (Barzilay and Lapata, 2005), biography generation (Dubou´e and McKeown, 2002; Lebret et al., 2016), and so on. Moreover, table-to-text genera1The codes are available at https://github.com/ lancopku/Pivot. Denise Margaret Scott Born 24 April 1955 Melbourne, Victoria Nationality Australian Other names Scotty Occupation Comedian, actor, television and radio presenter Known for Studio 10 Partner(s) John Lane Children 2 Denise Margaret Scott (born 24 April 1955) is an Australian comedian, actor and television presenter. Denise Margaret Scott 24 April 1955 Australian Comedian, actor, television and radio presenter Key Fact Prediction Surface Realization Figure 1: An example of table-to-text generation, and also a flow chart of our method. tion is a good testbed of a model’s ability of understanding the structured knowledge. Most of the existing methods for table-totext generation are based on the encoder-decoder framework (Sutskever et al., 2014; Bahdanau et al., 2014). They represent the source tables with a neural encoder, and generate the text word-byword with a decoder conditioned on the source table representation. Although the encoder-decoder framework has proven successful in the area of natural language generation (NLG) (Luong et al., 2015; Chopra et al., 2016; Lu et al., 2017; Yang et al., 2018), it requires a large parallel corpus, and is known to fail when the corpus is not big enough. 
Figure 2 shows the performance of a table-to-text model trained with different number of parallel data under the encoder-decoder framework. We can see that the performance is poor when the parallel data size is low. In practice, we lack the large parallel data in many domains, and it is expensive to construct a high-quality parallel corpus. This work focuses on the task of low resource table-to-text generation, where only limited paralarXiv:1908.03067v1 [cs.CL] 8 Aug 2019 0 10000 20000 30000 40000 50000 60000 Parallel Data Size 5 10 15 20 25 30 35 BLEU Figure 2: The BLEU scores of the a table-to-text model trained with different number of parallel data under the encoder-decoder framework on the WIKIBIO dataset. lel data is available. Some previous work (Puduppully et al., 2018; Gehrmann et al., 2018) formulates the task as the combination of content selection and surface realization, and models them with an end-to-end model. Inspired by these work, we break up the table-to-text generation into two stages, each of which is performed by a model trainable with only a few annotated data. Specifically, it first predicts the key facts from the tables, and then generates the text with the key facts, as shown in Figure 1. The two-stage method consists of two separate models: a key fact prediction model and a surface realization model. The key fact prediction model is formulated as a sequence labeling problem, so it needs much fewer annotated data than the encoder-decoder models. According to our experiments, the model can obtain 87.92% F1 score with only 1, 000 annotated data. As for the surface realization model, we propose a method to construct a pseudo parallel dataset without the need of labeled data. In this way, our model can make full use of the unlabeled text, and alleviate the heavy need of the parallel data. The contributions of this work are as follows: • We propose to break up the table-to-text generation into two stages with two separate models, so that the model can be trained with fewer annotated data. • We propose a method to construct a pseudo parallel dataset for the surface realization model, without the need of labeled data. • Experiments show that our proposed model can achieve 27.34 BLEU score on a biography generation dataset with only 1, 000 tabletext samples. 2 PIVOT: A Two-Stage Model In this section, we introduce our proposed twostage model, which we denote as PIVOT. We first give the formulation of the table-to-text generation and the related notations. Then, we provide an overview of the model. Finally, we describe the two models for each stage in detail. 2.1 Formulation and Notations Suppose we have a parallel table-to-text dataset P with N data samples and an unlabeled text dataset U with M samples. Each parallel sample consists of a source table T and a text description y = {y1, y2, · · · , yn}. The table T can be formulated as K records T = {r1, r2, r3, · · · , rK}, and each record is an attribute-value pair rj = (aj, vj). Each sample in the unlabeled text dataset U is a piece of text ¯y = {¯y1, ¯y2, · · · , ¯yn}. Formally, the task of table-to-text generation is to take the structured representations of table T = {(a1, v1), (a2, v2), · · · , (am, vm)} as input, and output the sequence of words y = {y1, y2, · · · , yn}. 2.2 Overview Figure 3 shows the overview architecture of our proposed model. Our model contains two stages: key fact prediction and surface realization. 
At the first stage, we represent the table into a sequence, and use a table-to-pivot model to select the key facts from the sequence. The table-topivot model adpots a bi-directional Long Shortterm Memory Network (Bi-LSTM) to predict a binary sequence of whether each word is reserved as the key facts. At the second stage, we build a sequence-to-sequence model to take the key facts selected in the first stage as input and emit the table description. In order to make use of the unlabeled text corpus, we propose a method to construct pseudo parallel data to train a better surface realization model. Moreover, we introduce a denoising data augmentation method to reduce the risk of error propagation between two stages. 2.3 Preprocessing: Key Fact Selection The two stages are trained separately, but we do not have the labels of which words in the table are the key facts in the dataset. In this work, we define the co-occurrence facts between the table and the 𝑥1 𝑥2 … 𝑥3 ℎ1 ℎ2 ℎ3 … Denise Margaret Scott Born 24 April 1955 Melbourne, Victoria Nationality Australian Other names Scotty Occupation Comedian, actor, television and radio presenter Known for Studio 10 Partner(s) John Lane Children 2 𝑥𝑚 ℎ𝑚 1 0 1 … 0 Denise Margaret Scott Born 24 April 1955 Melbourne, Victoria Nationality Australian Occupation Comedian, actor, television and radio presenter ҧ𝑥1 ҧ𝑥2 ҧ𝑥𝑘 … ℎ1 ℎ2 … ℎ𝑘 Attention 𝑠1 𝑠2 𝑠𝑛 … 𝑦1 𝑦2 … 𝑦𝑛 Denise Margaret Scott (born 24 April 1955) is an Australian comedian, actor and television presenter. Key Fact Prediction Surface Realization Figure 3: The overview of our model. For illustration, the surface realization model is a vanilla Seq2Seq, while it can also be a Transformer in our implementation. text as the key facts, so we can label the key facts automatically. Algorithm 1 illustrates the process of automatically annotating the key facts. Given a table and its associated text, we enumerate each attribute-value pair in the table, and compute the word overlap between the value and the text. The word overlap is defined as the number of words that are not stop words or punctuation but appear in both the table and the text. We collect all values that have at least one overlap with the text, and regard them as the key facts. In this way, we can obtain a binary sequence with the 0/1 label denoting whether the values in the table are the key facts. The binary sequence will be regarded as the supervised signal of the key fact prediction model, and the selected key facts will be the input of the surface realization model. 2.4 Stage 1: Key Fact Prediction The key fact prediction model is a Bi-LSTM layer with a multi-layer perceptron (MLP) classifier to determine whether each word is selected. In order to represent the table, we follow the previous work (Liu et al., 2018) to concatenate all the words in the values of the table into a word sequence, and each word is labeled with its attribute. In this way, the table is represented as two sequences: the value sequence {v1, v2, · · · , vm} and the attribute sequence {a1, a2, · · · , am}. A word embedding and an attribute embedding are used to transform Algorithm 1 Automatic Key Fact Annotation Input: A parallel corpora P = {(xi, yi)}, where xi is a table, and yi is a word sequence. 
1: Initial the selected key fact list W = [] 2: for each sample (x, y) in the parallel dataset P do 3: x = {(v1, a1), (v2, a2), · · · , (vm, am)} 4: y = {y1, y2, · · · , yn} 5: Initial the selected attribute set A = {} 6: Initial the selected key fact list Wi = [] 7: for each attribute-value pair (vi, ai) in table x do 8: if vi in y And vi is not stop word then 9: Append attribute ai into attribute set A 10: end if 11: if ai in A then 12: Append value vi into key fact list Wi 13: end if 14: end for 15: Collect the key fact list W += Wi 16: end for Output: The selected key fact list W two sequences into the vectors. Following (Lebret et al., 2016; Liu et al., 2018), we introduce a position embedding to capture structured information of the table. The position information is represented as a tuple (p+ w, pw), which includes the positions of the token w counted from the beginning and the end of the value respectively. For example, the record of “(Name, Denise Margaret Scott)” is represented as “({Denise, Name, 1, 3}, {Margaret, Name, 2, 2}, {Scott, Name, 3, 1})”. In this way, each token in the table has an unique feature embedding even if there exists two same words. Finally, the word embedding, the attribute embedding, and the position embedding are concatenated as the input of the model x. Table Encoder: The goal of the source table encoder is to provide a series of representations for the classifier. More specifically, the table encoder is a Bi-LSTM: ht = BiLSTM(xt,⃗ht−1, ⃗ ht+1) (1) where ⃗ht and ⃗ ht are the forward and the backward hidden outputs respectively, ht is the concatenation of ⃗ht and ⃗ ht, and xt is the input at the t-th time step. Classifier: The output vector ht is fed into a MLP classifier to compute the probability distribution of the label p1(lt|x) p1(lt|x) = softmax(Wcht + bc) (2) where Wc and bc are trainable parameters of the classifier. 2.5 Stage 2: Surface Realization The surface realization stage aims to generate the text conditioned on the key facts predicted in Stage 1. We adpot two models as the implementation of surface realization: the vanilla Seq2Seq and the Transformer (Vaswani et al., 2017). Vanilla Seq2Seq: In our implementation, the vanilla Seq2Seq consists of a Bi-LSTM encoder and an LSTM decoder with the attention mechanism. The Bi-LSTM encoder is the same as that of the key fact prediction model, except that it does not use any attribute embedding or position embedding. The decoder consists of an LSTM, an attention component, and a word generator. It first generates the hidden state st: st = f(yt−1, st−1) (3) where f(·, ·) is the function of LSTM for one time step, and yt−1 is the last generated word at time step t −1. Then, the hidden state st from LSTM is fed into the attention component: vt = Attention(st, h) (4) where Attention(·, ·) is the implementation of global attention in (Luong et al., 2015), and h is a sequence of outputs by the encoder. Given the output vector vt from the attention component, the word generator is used to compute the probability distribution of the output words at time step t: p2(yt|x) = softmax(Wgvt + bg) (5) where Wg and bg are parameters of the generator. The word with the highest probability is emitted as the t-th word. Transformer: Similar to vanilla Seq2Seq, the Transformer consists of an encoder and a decoder. 
The encoder applies a Transformer layer to encode each word into the representation ht: ht = Transformer(xt, x) (6) Inside the Transformer, the representation xt attends to a collection of the other representations x = {x1, x2, · · · , xm}. Then, the decoder produces the hidden state by attending to both the encoder outputs and the previous decoder outputs: vt = Transformer(yt, y<t, h) (7) Finally, the output vector is fed into a word generator with a softmax layer, which is the same as Eq. 5. For the purpose of simplicity, we omit the details of the inner computation of the Transformer layer, and refer the readers to the related work (Vaswani et al., 2017). 2.6 Pseudo Parallel Data Construction The surface realization model is based on the encoder-decoder framework, which requires a large amount of training data. In order to augment the training data, we propose a novel method to construct pseudo parallel data. The surface realization model is used to organize and complete the text given the key facts. Therefore, it is possible to construct the pseudo parallel data by removing the skeleton of the text and reserving only the key facts. In implementation, we label the text with Stanford CoreNLP toolkit2 to assign the POS tag for each word. We reserve the words whose POS tags are among the tag set of {NN, NNS, NNP, NNPS, JJ, JJR, JJS, CD, FW}, and remove the remaining words. In this way, we can construct a large-scale pseudo parallel data to train the surface realization model. 2https://stanfordnlp.github.io/ CoreNLP/index.html 2.7 Denoising Data Augmentation A problem of the two-stage model is that the error may propagate from the first stage to the second stage. A possible solution is to apply beam search to enlarge the searching space at the first stage. However, in our preliminary experiments, when the beam size is small, the diversity of predicted key facts is low, and also does not help to improve the accuracy. When the beam size is big, the decoding speed is slow but the improvement of accuracy is limited. To address this issue, we implement a method of denoising data augmentation to reduce the hurt from error propagation and improve the robustness of our model. In practice, we randomly drop some words from the input of surface realization model, or insert some words from other samples. The dropping simulates the cases when the key fact prediction model fails to recall some cooccurrence, while the inserting simulates the cases when the model predicts some extra facts from the table. By adding the noise, we can regard these data as the adversarial examples, which is able to improve the robustness of the surface realization model. 2.8 Training and Decoding Since the two components of our model are separate, the objective functions of the models are optimized individually. Training of Key Fact Prediction Model: The key fact prediction model, as a sequence labeling model, is trained using the cross entropy loss: L1 = − m X i=1 log p1(li|x) (8) Training of Surface Realization Model: The loss function of the surface realization model can be written as: L2 = − n X i=1 log p2(yi|¯x) (9) where ¯x is a sequence of the selected key facts at Stage 1. The surface realization model is also trained with the pseudo parallel data as described in Section 2.6. The objective function can be written as: L3 = − n X i=1 log p2(¯yi|ˆx) (10) where ¯y is the unlabeled text, and ˆx is the pseudo text paired with ¯y. Decoding: The decoding consists of two steps. 
At the first step, it predicts the label by the key fact prediction model: ˆlt = arg max lt∈{0,1} p1(lt|x) (11) The word with ˆlt = 1 is reserved, while that with ˆlt = 0 is discarded. Therefore, we can obtain a sub-sequence ¯x after the discarding operation. At the second step, the model emits the text with the surface realization model: ˆyt = arg max yt∈V p2(yt|¯x) (12) where V is the vocabulary size of the model. Therefore, the word sequence {ˆy1, ˆy2, · · · , ˆyN} forms the generated text. 3 Experiments We evaluate our model on a table-to-text generation benchmark. We denote the PIVOT model under the vanilla Seq2Seq framework as PIVOTVanilla, and that under the Transformer framework as PIVOT-Trans. 3.1 Dataset We use WIKIBIO dataset (Lebret et al., 2016) as our benchmark dataset. The dataset contains 728, 321 articles from English Wikipedia, which uses the first sentence of each article as the description of the related infobox. There are an average of 26.1 words in each description, of which 9.5 words also appear in the table. The table contains 53.1 words and 19.7 attributes on average. Following the previous work (Lebret et al., 2016; Liu et al., 2018), we split the dataset into 80% training set, 10% testing set, and 10% validation set. In order to simulate the low resource scenario, we randomly sample 1, 000 parallel sample, and remove the tables from the rest of the training data. 3.2 Evaluation Metrics Following the previous work (Lebret et al., 2016; Wiseman et al., 2018), we use BLEU-4 (Papineni et al., 2002), ROUGE-4 (F measure) (Lin and Hovy, 2003), and NIST-4 (Belz and Reiter, 2006) as the evaluation metrics. 3.3 Implementation Details The vocabulary is limited to the 20, 000 most common words in the training dataset. The batch size is 64 for all models. We implement the early stopping mechanism with a patience that the performance on the validation set does not fall in 4 epochs. We tune the hyper-parameters based on the performance on the validation set. The key fact prediction model is a Bi-LSTM. The dimensions of the hidden units, the word embedding, the attribute embedding, and the position embedding are 500, 400, 50, and 5, respectively. We implement two models as the surface realization models. For the vanilla Seq2Seq model, we set the hidden dimension, the embedding dimension, and the dropout rate (Srivastava et al., 2014) to be 500, 400, and 0.2, respectively. For the Transfomer model, the hidden units of the multihead component and the feed-forward layer are 512 and 2048. The embedding size is 512, the number of heads is 8, and the number of Transformer blocks is 6. We use the Adam (Kingma and Ba, 2014) optimizer to train the models. For the hyperparameters of Adam optimizer, we set the learning rate α = 0.001, two momentum parameters β1 = 0.9 and β2 = 0.999, and ϵ = 1 × 10−8. We clip the gradients (Pascanu et al., 2013) to the maximum norm of 5.0. We half the learning rate when the performance on the validation set does not improve in 3 epochs. 3.4 Baselines We compare our models with two categories of baseline models: the supervised models which exploit only parallel data (Vanilla Seq2Seq, Transformer, Struct-aware), and the semi-supervised models which are trained on both parallel data and unlabelled data (PretrainedMT, SemiMT). The baselines are as follows: • Vanilla Seq2Seq (Sutskever et al., 2014) with the attention mechanism (Bahdanau et al., 2014) is a popular model for natural language generation. 
• Transformer (Vaswani et al., 2017) is a state-of-the-art model under the encoderdecoder framework, based solely on attention mechanisms. • Struct-aware (Liu et al., 2018) is the stateof-the-art model for table-to-text generation. Model F1 P R PIVOT (Bi-LSTM) 87.92 92.59 83.70 Model BLEU NIST ROUGE Vanilla Seq2Seq 2.14 0.2809 0.47 Structure-S2S 3.27 0.9612 0.71 PretrainedMT 4.35 1.9937 0.91 SemiMT 6.76 3.5017 2.04 PIVOT-Vanilla 20.09 6.5130 18.31 Model BLEU NIST ROUGE Transformer 5.48 1.9873 1.26 PretrainedMT 6.43 2.1019 1.77 SemiMT 9.71 2.7019 3.31 PIVOT-Trans 27.34 6.8763 19.30 Table 1: Results of our model and the baselines. Above is the performance of the key fact prediction component (F1: F1 score, P: precision, R: recall). Middle is the comparison between models under the Vanilla Seq2Seq framework. Below is the models implemented with the transformer framework. It models the inner structure of table with a field-gating mechanism insides the LSTM, and learns the interaction between tables and text with a dual attention mechanism. • PretrainedMT (Skorokhodov et al., 2018) is a semi-supervised method to pretrain the decoder of the sequence-to-sequence model with a language model. • SemiMT (Cheng et al., 2016) is a semisupervised method to jointly train the sequence-to-sequence model with an autoencoder. The supervised models are trained with the same parallel data as our model, while the semisupervised models share the same parallel data and the unlabeled data as ours. 3.5 Results We compare our PIVOT model with the above baseline models. Table 1 summarizes the results of these models. It shows that our PIVOT model achieves 87.92% F1 score, 92.59% precision, and 83.70% recall at the stage of key fact prediction, which provides a good foundation for the stage of surface realization. Based on the selected key facts, our models achieve the scores of 20.09 BLEU, 6.5130 NIST, and 18.31 ROUGE under 0 1000 6000 30000 60000 300000 Parallel Data Size 0 10 20 30 40 BLEU Vanilla Pivot Vanilla (a) Vanilla Seq2Seq v.s PIVOT-Vanilla 0 1000 6000 30000 60000 300000 Parallel Data Size 0 10 20 30 40 BLEU Transformer Pivot Trans (b) Transformer v.s PIVOT-Trans Figure 4: The BLEU measure of our Pivot model and the baselines trained with different parallel data size. the vanilla Seq2Seq framework, and 27.34 BLEU, 6.8763 NIST, and 19.30 ROUGE under the Transformer framework, which significantly outperform all the baseline models in terms of all metrics. Furthermore, it shows that the implementation with the Transformer can obtain higher scores than that with the vanilla Seq2Seq. 3.6 Varying Parallel Data Size We would like to further analyze the performance of our model given different size of parallel size. Therefore, we randomly shuffle the full parallel training set. Then, we extract the first K samples as the parallel data, and modify the remaining data as the unlabeled data by removing the tables. We set K = 1000, 6000, 30000, 60000, 300000, and compare our pivot models with both vanilla Seq2Seq and Transformer. Figure 4 shows the BLEU scores of our models and the baselines. When the parallel data size is small, the pivot model can outperform the vanilla Seq2Seq and Transformer by a large margin. With the increasement of the parallel data, the margin gets narrow because of the upper bound of the model capacity. 
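The split construction described above is simple enough to sketch directly. The snippet below shows one way to build the parallel subset and the unlabeled remainder from the full training set; the assumption that each example is a dict with "table" and "text" keys is ours, purely for illustration.

```python
import random

def make_low_resource_split(train_set, k, seed=0):
    """Keep k (table, text) pairs as parallel data; strip the tables
    from the remaining examples so they become unlabeled text."""
    examples = list(train_set)
    random.Random(seed).shuffle(examples)
    parallel = examples[:k]                           # (table, text) pairs
    unlabeled_text = [ex["text"] for ex in examples[k:]]  # tables removed
    return parallel, unlabeled_text

# Illustrative toy data; the real experiments use the WIKIBIO training set
# with k in {1000, 6000, 30000, 60000, 300000}.
toy_train = [{"table": {"name": f"entity {i}"}, "text": f"description {i}"}
             for i in range(10)]
parallel, unlabeled = make_low_resource_split(toy_train, k=3)
print(len(parallel), len(unlabeled))  # 3 7
```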
1000 6000 30000 60000 300000 Parallel Data Size 80.0 82.5 85.0 87.5 90.0 92.5 95.0 97.5 100.0 F1-score Figure 5: The F1 score of the key fact prediction model trained with different parallel data size. Model BLEU NIST ROUGE Vanilla Seq2Seq 2.14 0.2809 0.47 + Pseudo 10.01 3.0620 6.55 Transformer 6.43 2.1019 1.77 + Pseudo 14.35 4.1763 8.42 w/o Pseudo 11.08 3.6910 4.84 PIVOT-Vanilla 20.09 6.5130 18.31 w/o Pseudo 14.18 4.2686 7.10 PIVOT-Trans 27.34 6.8763 19.30 Table 2: Ablation study on the 1k training set for the effect of pseudo parallel data. Figure 5 shows the curve of the F1 score of the key fact prediction model trained with different parallel data size. Even when the number of annotated data is extremely small, the model can obtain a satisfying F1 score about 88%. In general, the F1 scores between the low and high parallel data sizes are close, which validates the assumption that the key fact prediction model does not rely on a heavy annotated data. 3.7 Effect of Pseudo Parallel Data In order to analyze the effect of pseudo parallel data, we conduct ablation study by adding the data to the baseline models and removing them from our models. Table 2 summarizes the results of the ablation study. Surprisingly, the pseudo parallel data can not only help the pivot model, but also significantly improve vanilla Seq2Seq and Transformer. The reason is that the pseudo parallel data can help the models to improve the ability of surface realization, which these models lack under the condition of limited parallel data. The pivot Model BLEU NIST ROUGE PIVOT-Vanilla 20.09 6.5130 18.31 w/o denosing 18.45 4.8714 11.43 PIVOT-Trans 27.34 6.8763 19.30 w/o denosing 25.72 6.5475 17.95 Table 3: Ablation study on the 1k training set for the effect of the denoising data augmentation. Transformer: a athletics -lrb- nfl-rrb- . SemiMT: gustav dovid -lrb- born 25 august 1945 -rrb- is a former hungarian politician , who served as a member of the united states -lrb- senate -rrb- from president to 1989 . PIVOT-Trans: philippe adnot -lrb- born august 25 , 1945 -rrb- is a french senator , senator , and a senator of the french senate . Reference: philippe adnot -lrb- born 25 august 1945 in rhges -rrb- is a member of the senate of france . Table 4: An example of the generated text by our model and the baselines on 1k training set. models can outperform the baselines with pseudo data, mainly because it breaks up the operation of key fact prediction and surface realization, both of which are explicitly and separately optimized. 3.8 Effect of Denoising Data Augmentation We also want to know the effect of the denoising data augmentation. Therefore, we remove the denoising data augmentation from our model, and compare with the full model. Table 3 shows the results of the ablation study. It shows that the data augmentation brings a significant improvement to the pivot models under both vanilla Seq2Seq and Transformer frameworks, which demonstrates the efficiency of the denoising data augmentation. 3.9 Qualitative Analysis We provide an example to illustrate the improvement of our model more intuitively, as shown in Table 4. Under the low resource setting, the Transformer can not produce a fluent sentence, and also fails to select the proper fact from the table. Thanks to the unlabeled data, the SemiMT model can generate a fluent, human-like description. However, it suffers from the hallucination problem so that it generates some unseen facts, which is not faithful to the source input. 
Although the PIVOT model has some problem in generating repeating words (such as “senator” in the example), it can select the correct key facts from the table, and produce a fluent description. 4 Related Work This work is mostly related to both table-to-text generation and low resource natural language generation. 4.1 Table-to-text Generation Table-to-text generation is widely applied in many domains. Dubou´e and McKeown (2002) proposed to generate the biography by matching the text with a knowledge base. Barzilay and Lapata (2005) presented an efficient method for automatically learning content selection rules from a corpus and its related database in the sports domain. Liang et al. (2009) introduced a system with a sequence of local decisions for the sportscasting and the weather forecast. Recently, thanks to the success of the neural network models, more work focused on the neural generative models in an endto-end style (Wiseman et al., 2017; Puduppully et al., 2018; Gehrmann et al., 2018; Sha et al., 2018; Bao et al., 2018; Qin et al., 2018). Lebret et al. (2016) constructed a dataset of biographies from Wikipedia, and built a neural model based on the conditional neural language models. Liu et al. (2018) introduced a structure-aware sequence-tosequence architecture to model the inner structure of the tables and the interaction between the tables and the text. Wiseman et al. (2018) focused on the interpretable and controllable generation process, and proposed a neural model using a hidden semi-markov model decoder to address these issues. Nie et al. (2018) attempted to improve the fidelity of neural table-to-text generation by utilizing pre-executed symbolic operations in a sequence-to-sequence model. 4.2 Low Resource Natural Language Generation The topic of low resource learning is one of the recent spotlights in the area of natural language generation (Tilk and Alum¨ae, 2017; Tran and Nguyen, 2018). More work focused on the task of neural machine translation, whose models can generalize to other tasks in natural language generation. Gu et al. (2018) proposed a novel universal machine translation which uses a transfer-learning approach to share lexical and sentence level representations across different languages. Cheng et al. (2016) proposed a semi-supervised approach that jointly train the sequence-to-sequence model with an auto-encoder, which reconstruct the monolingual corpora. More recently, some work explored the unsupervised methods to totally remove the need of parallel data (Lample et al., 2018b,a; Artetxe et al., 2017; Zhang et al., 2018). 5 Conclusions In this work, we focus on the low resource tableto-text generation, where only limited parallel data is available. We separate the generation into two stages, each of which is performed by a model trainable with only a few annotated data. Besides, We propose a method to construct a pseudo parallel dataset for the surface realization model, without the need of any structured table. Experiments show that our proposed model can achieve 27.34 BLEU score on a biography generation dataset with only 1, 000 parallel data. Acknowledgement We thank the anonymous reviewers for their thoughtful comments. This work was supported in part by National Natural Science Foundation of China (No. 61673028). Xu Sun is the corresponding author of this paper. References Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2017. Unsupervised neural machine translation. CoRR, abs/1710.11041. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 
2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. Jun-Wei Bao, Duyu Tang, Nan Duan, Zhao Yan, Yuanhua Lv, Ming Zhou, and Tiejun Zhao. 2018. Tableto-text: Describing table region with natural language. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5020–5027. Regina Barzilay and Mirella Lapata. 2005. Collective content selection for concept-to-text generation. In HLT/EMNLP 2005, Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, 6-8 October 2005, Vancouver, British Columbia, Canada, pages 331–338. Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In EACL 2006, 11st Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference, April 3-7, 2006, Trento, Italy. Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semisupervised learning for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93– 98. Pablo Ariel Dubou´e and Kathleen R. McKeown. 2002. Content planner construction via evolutionary algorithms and a corpus-based fitness function. In Proceedings of the International Natural Language Generation Conference, Harriman, New York, USA, July 2002, pages 89–96. Sebastian Gehrmann, Falcon Z. Dai, Henry Elder, and Alexander M. Rush. 2018. End-to-end content and plan selection for data-to-text generation. In Proceedings of the 11th International Conference on Natural Language Generation, Tilburg University, The Netherlands, November 5-8, 2018, pages 46–56. Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O. K. Li. 2018. Universal neural machine translation for extremely low resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACLHLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 344–354. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Representations (ICLR). Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP). R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1203–1213. Percy Liang, Michael I. Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In ACL 2009, Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 2-7 August 2009, Singapore, pages 91–99. Chin-Yew Lin and Eduard H. Hovy. 2003. Automatic evaluation of summaries using n-gram cooccurrence statistics. In Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLTNAACL 2003. Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text generation by structure-aware seq2seq learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 4881– 4888. Jiasen Lu, Caiming Xiong, Devi Parikh, and Richard Socher. 2017. Knowing when to look: Adaptive attention via a visual sentinel for image captioning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 3242–3250. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, pages 1412– 1421. Feng Nie, Jinpeng Wang, Jin-Ge Yao, Rong Pan, and Chin-Yew Lin. 2018. Operation-guided neural networks for high fidelity data-to-text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 3879–3889. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, pages 1310–1318. Ratish Puduppully, Li Dong, and Mirella Lapata. 2018. Data-to-text generation with content selection and planning. CoRR, abs/1809.00582. Guanghui Qin, Jin-Ge Yao, Xuening Wang, Jinpeng Wang, and Chin-Yew Lin. 2018. Learning latent semantic annotations for grounding natural language to structured data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 3761–3771. Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, and Zhifang Sui. 2018. Order-planning neural text generation from structured data. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5414–5421. Ivan Skorokhodov, Anton Rykachevskiy, Dmitry Emelyanenko, Sergey Slotin, and Anton Ponkratov. 2018. 
Semi-supervised neural machine translation with language models. In Proceedings of the Workshop on Technologies for MT of Low Resource Languages, LoResMT@AMTA 2018, Boston, MA, USA, March 21, 2018, pages 37–44. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, pages 3104–3112. Ottokar Tilk and Tanel Alum¨ae. 2017. Low-resource neural headline generation. In Proceedings of the Workshop on New Frontiers in Summarization, NFiS@EMNLP 2017, Copenhagen, Denmark, September 7, 2017, pages 20–26. Van-Khanh Tran and Le-Minh Nguyen. 2018. Dual latent variable model for low-resource natural language generation in dialogue systems. In Proceedings of the 22nd Conference on Computational Natural Language Learning, CoNLL 2018, Brussels, Belgium, October 31 - November 1, 2018, pages 21– 30. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 6000–6010. Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2253–2263. Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2018. Learning neural templates for text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 3174–3187. Pengcheng Yang, Xu Sun, Wei Li, Shuming Ma, Wei Wu, and Houfeng Wang. 2018. SGM: sequence generation model for multi-label classification. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 3915–3926. Yi Zhang, Jingjing Xu, Pengcheng Yang, and Xu Sun. 2018. Learning sentiment memories for sentiment modification without parallel data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1103–1108.
Unsupervised Neural Text Simplification Sai Surya† Abhijit Mishra‡ Anirban Laha‡ Parag Jain‡ Karthik Sankaranarayanan‡ †IIT Kharagpur, India ‡IBM Research [email protected] {abhijimi,anirlaha,pajain34,kartsank}@in.ibm.com Abstract The paper presents a first attempt towards unsupervised neural text simplification that relies only on unlabeled text corpora. The core framework is composed of a shared encoder and a pair of attentional-decoders, crucially assisted by discrimination-based losses and denoising. The framework is trained using unlabeled text collected from en-Wikipedia dump. Our analysis (both quantitative and qualitative involving human evaluators) on public test data shows that the proposed model can perform text-simplification at both lexical and syntactic levels, competitive to existing supervised methods. It also outperforms viable unsupervised baselines. Adding a few labeled pairs helps improve the performance further. 1 Introduction Text Simplification (TS) deals with transforming the original text into simplified variants to increase its readability and understandability. TS is an important task in computational linguistics, and has numerous use-cases in fields of education technology, targeted content creation, language learning, where producing variants of the text with varying degree of simplicity is desired. TS systems are typically designed to simplify from two different linguistic aspects: (a) Lexical aspect, by replacing complex words in the input with simpler synonyms (Devlin, 1998; Candido Jr et al., 2009; Yatskar et al., 2010; Biran et al., 2011; Glavaˇs and ˇStajner, 2015), and (b) Syntactic aspect, by altering the inherent hierarchical structure of the sentences (Chandrasekar and Srinivas, 1997; Canning and Tait, 1999; Siddharthan, 2006; Filippova and Strube, 2008; Brouwers et al., 2014). From the perspective of sentence construction, sentence simplification can be thought to be a form of text-transformation that involves three major types of operations such as (a) splitting (Siddharthan, 2006; Petersen and Ostendorf, 2007; Narayan and Gardent, 2014) (b) deletion/compression (Knight and Marcu, 2002; Clarke and Lapata, 2006; Filippova and Strube, 2008; Rush et al., 2015; Filippova et al., 2015), and (c) paraphrasing (Specia, 2010; Coster and Kauchak, 2011; Wubben et al., 2012; Wang et al., 2016; Nisioi et al., 2017). Most of the current TS systems require largescale parallel corpora for training (except for systems like Glavaˇs and ˇStajner (2015) that performs only lexical-simplification), which is a major impediment in scaling to newer languages, use-cases, domains and output styles for which such largescale parallel data do not exist. In fact, one of the popular corpus for TS in English language, i.e., the Wikipedia-SimpleWikipedia aligned dataset has been prone to noise (mis-aligned instances) and inadequacy (i.e., instances having non-simplified targets) (Xu et al., 2015; ˇStajner et al., 2015), leading to noisy supervised models (Wubben et al., 2012). While creation of better datasets (such as, Newsela by Xu et al. (2015)) can always help, we explore the unsupervised learning paradigm which can potentially work with unlabeled datasets that are cheaper and easier to obtain. At the heart of the TS problem is the need for preservation of language semantics with the goal of improving readability. 
From a neural-learning perspective, this entails a specially designed autoencoder, which not only is capable of reconstructing the original input but also can additionally introduce variations so that the auto-encoded output is a simplified version of the input. Intuitively, both of these can be learned by looking at the structure and language patterns of a large amount of non-aligned complex and simple sentences (which are much cheaper to obtain compared to aligned parallel data). These motivations form the basis of our work. Our approach relies only on two unlabeled text corpora - one representing relatively simpler sentences than the other (which we call complex). The crux of the (unsupervised) auto-encoding framework is a shared encoder and a pair of attention-based decoders (one for each type of corpus). The encoder attempts to produce semanticspreserving representations which can be acted upon by the respective decoders (simple or complex) to generate the appropriate text output they are designed for. The framework is crucially supported by two kinds of losses: (1) adversarial loss - to distinguish between the real or fake attention context vectors for the simple decoder, and (2) diversification loss - to distinguish between attention context vectors of the simple decoder and the complex decoder. The first loss ensures that only the aspects of semantics that are necessary for simplification are passed to the simple decoder in the form of the attention context vectors. The second loss, on the other hand, facilitates passing different semantic aspects to the different decoders through their respective context vectors. Also we employ denoising in the auto-encoding setup for enabling syntactic transformations. The framework is trained using unlabeled text collected from Wikipedia (complex) and Simple Wikipedia (simple). It attempts to perform simplification both lexically and syntactically unlike prevalent systems which mostly target them separately. We demonstrate the competitiveness of our unsupervised framework alongside supervised skylines through both automatic evaluation metrics and human evaluation studies. We also outperform another unsupervised baseline (Artetxe et al., 2018b), first proposed for neural machine translation. Further, we demonstrate that by leveraging a small amount of labeled parallel data, performance can be improved further. Our code and a new dataset containing partitioned unlabeled sets of simple and complex sentences is publicly available1. 2 Related Work Text Simplification has often been discussed from psychological and linguistic standpoints (L’Allier, 1980; McNamara et al., 1996; Linderholm et al., 2000). A heuristic-based system was first introduced by Chandrasekar and Srinivas (1997) which induces rules for simplification automatically extracted from annotated corpora. Canning and Tait (1999) proposed a modular system that uses NLP tools such as morphological analyzer, POS tagger 1https://github.com/subramanyamdvss/UnsupNTS plus heuristics to simplify the text both lexically and syntactically. Most of these systems (Siddharthan, 2014) are separately targeted towards lexical and syntactic simplification and are limited to splitting and/or truncating sentences. For paraphrasing based simplification, data-driven approaches were proposed like phrase-based SMT (Specia, 2010; ˇStajner et al., 2015) or their variants (Coster and Kauchak, 2011; Xu et al., 2016), that combine heuristic and optimization strategies for better TS. 
Recently proposed TS systems are based on neural seq2seq architecture (Bahdanau et al., 2014) which is modified for TS specific operations (Wang et al., 2016; Nisioi et al., 2017). While these systems produce state of the art results on the popular Wikipedia dataset (Coster and Kauchak, 2011), they may not be generalizable because of the noise and bias in the dataset (Xu et al., 2015) and overfitting. Towards this, ˇStajner and Nisioi (2018) showed that improved datasets and minor model changes (such as using reduced vocabulary and enabling copy mechanism) help obtain reasonable performance for both in-domain and cross-domain TS. In the unsupervised paradigm, Paetzold and Specia (2016) proposed an unsupervised lexical simplification technique that replaces complex words in the input with simpler synonyms, which are extracted and disambiguated using word embeddings. However, this work, unlike ours only addresses lexical simplification and cannot be trivially extended for other forms of simplification such as splitting and rephrasing. Other works related to style transfer (Zhang et al., 2018; Shen et al., 2017; Xu et al., 2018) typically look into the problem of sentiment transformation and are not motivated by the linguistic aspects of TS, and hence not comparable to our work. As far as we know, ours is a first of its kind end-to-end solution for unsupervised TS. At this point, though supervised solutions perform better than unsupervised ones, we believe unsupervised techniques should be further explored since they hold greater potential with regards to scalability to various tasks. 3 Model Description Our system is built based on the encode-attenddecode style architecture (Bahdanau et al., 2014) with both algorithmic and architectural changes applied to the standard model. An input sequence of word embeddings X = {x1, x2, . . . , xn} (ob𝐴"# 𝐴"$ 𝐴"% 𝐴"& 𝑦# 𝑦$ 𝑦% 𝑦& 𝐴(# 𝐴($ 𝐴(% 𝐴(& 𝑦# 𝑦$ 𝑦% 𝑦& 𝑥# 𝑥$ 𝑥% 𝑥* 𝑬 𝑮𝒅 𝑮𝒔 Decoder Decoder Encoder Discriminator Classifier 𝓛𝒓𝒆𝒄(𝜽𝑮𝒅,𝜽𝑬) , 𝓛𝒅𝒆𝒏𝒐𝒊(𝜽𝑮𝒅,𝜽𝑬) 𝓛𝒂𝒅𝒗,𝑫(𝜽𝑫) 𝓛𝒅𝒊𝒗,𝑪(𝜽𝑪) 𝓛𝒓𝒆𝒄(𝜽𝑮𝒔,𝜽𝑬) , 𝓛𝒅𝒆𝒏𝒐𝒊(𝜽𝑮𝒔,𝜽𝑬) 𝓛𝒅𝒊𝒗,𝑮𝒔(𝜽𝑬,𝜽𝑮𝒔) 𝓛𝒂𝒅𝒗,𝑮𝒔(𝜽𝑬,𝜽𝑮𝒔) Figure 1: System Architecture. Input sentences of any domain is encoded by E, and decoded by Gs, Gd. Discriminator D and classifier C tune the attention vectors for simplification. L represents loss functions. The figure only reveals one layer in E, Gs and Gd for simplicity. However, the model uses two layers of GRUs (Section 3). tained after a standard look up operation on the embedding matrix), is passed through a shared encoder (E), the output representation from which is fed to two decoders (Gs, Gd) with attention mechanism. Gs is meant to generate a simple sentence from the encoded representation, whereas Gd generates a complex sentence. A discriminator (D) and a classifier (C) are also employed adversarially to distinguish between the attention context vectors computed with respect to the two decoders. Figure 1 is illustrates our system. We describe the components below. 3.1 Encode-Attend-Decode Model Encoder E uses two layers of bi-directional GRUs (Cho et al., 2014b), and decoders Gs, Gd have two layers of GRUs each. E extracts the hidden representations from an input sentence. The decoders output sentences, sequentially one word at a time. Each decoder-step involves using global attention to create a context-vector (hidden representations weighted by attention weights) as an input for the next decoder-step. The attention mechanism enables the decoders to focus on different parts of the input sentence. 
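A minimal PyTorch sketch of this encode-attend-decode setup is given below. It is an illustration rather than the authors' implementation: the attention scoring function, the choice of feeding [previous word embedding; context vector] into the decoder, and the variable names are simplifying assumptions, and the dimensions are only illustrative (300-dimensional embeddings and 600-dimensional hidden states, following the hyperparameters reported in Section 5.2).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    """E: two-layer bidirectional GRU producing hidden states h_1 ... h_n."""
    def __init__(self, emb_dim=300, hid_dim=600):
        super().__init__()
        self.rnn = nn.GRU(emb_dim, hid_dim, num_layers=2,
                          bidirectional=True, batch_first=True)

    def forward(self, x_emb):            # x_emb: (batch, n, emb_dim)
        H, _ = self.rnn(x_emb)           # H: (batch, n, 2 * hid_dim)
        return H

def attention_context(H, dec_state, score_proj):
    """Global attention: A_t = sum_i a_it * h_i (Eq. (1) below)."""
    scores = torch.bmm(H, score_proj(dec_state).unsqueeze(2)).squeeze(2)  # (batch, n)
    a_t = F.softmax(scores, dim=1)
    return torch.bmm(a_t.unsqueeze(1), H).squeeze(1)                      # (batch, 2 * hid_dim)

class AttnDecoder(nn.Module):
    """Gs or Gd: two-layer GRU decoder fed [previous word embedding; A_t] per step."""
    def __init__(self, vocab_size, emb_dim=300, hid_dim=600):
        super().__init__()
        self.score_proj = nn.Linear(hid_dim, 2 * hid_dim)   # scores decoder state against H
        self.rnn = nn.GRU(emb_dim + 2 * hid_dim, hid_dim,
                          num_layers=2, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def step(self, y_prev_emb, state, H):
        # state: (2, batch, hid_dim); attend with the top-layer hidden state.
        A_t = attention_context(H, state[-1], self.score_proj)
        rnn_in = torch.cat([y_prev_emb, A_t], dim=-1).unsqueeze(1)
        out, state = self.rnn(rnn_in, state)
        return self.out(out.squeeze(1)), state, A_t

# One shared encoder and two decoders (simple / complex); the decoder's initial
# state (e.g., derived from the encoder) and the embedding lookup are omitted.
# E, Gs, Gd = SharedEncoder(), AttnDecoder(V), AttnDecoder(V)
```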
For the input sentence X with n words, the encoder produces n hidden representations, H = {h1, h2, . . . , hn}. The context vector extracted from X by a decoder G for time-step t is represented as, At(X) = n X i=1 aithi (1) where, ait denotes attention weight for the hidden representation at the ith input position with respect to decoder-step t. As there are two decoders, Ast(X) and Adt(X) denote the context vectors computed from decoders Gs and Gd respectively for time-steps t ∈{1 . . . m}, m denoting the total number of decoding steps performed2. The matrices As(X) and Ad(X) represent the sequence of respective context vectors from all time-steps. 3.2 Discriminator and Classifier A discriminator D is employed to influence the way the decoder Gs will attend to the hidden representations, which has to be different for different types of inputs to the shared encoder E (simple vs complex). The input to D is the context vector sequence matrix As pertaining to Gs, and it produces a binary output, {1, 0}, 1 indicating the fact that the context vector sequence is close to a typical context vector sequence extracted from simple sentences seen in the dataset. Gs and D are indulged in an adversarial interplay through an adversarial loss function (see Section 4.2), analogous to GANs (Goodfellow et al., 2014), where the generator and discriminators, converge to a point where the distribution of the generations eventually resembles the distribution of the genuine samples. In our case, adversarial loss tunes the context vector sequence from a complex sentence by Gs to ultimately resemble the context vector sequence of simple sentences in the corpora. This ensures that the resultant context vector for Gs captures only the necessary language signals to decode a simple sentence. A classifier (C) is introduced for diversification to ensure that the way decoder Gs attends to the hidden representations remains different from 2For a particular X, m can differ for the two decoders. Gd. It helps distinguish between simple and complex context vector sequences with respect to Gs and Gd respectively. The classifier diversifies the context vectors given as input to the different decoders. Intuitively, different linguistic signals are needed to decode a complex sentence vis-´a-vis a simple one. Refer Section 4.3 for more details. Both D and C use a CNN-based classifier analogous to Kim (2014). All layers are shared between D and C except the fully-connected layer preceeding the softmax function. 3.3 Special Purpose Word-Embeddings Pre-trained word embeddings are often seen to have positive impact on sequence-to-sequence frameworks (Cho et al., 2014a; Qi et al., 2018). However, traditional embeddings are not good at capturing relations like synonymy (Tissier et al., 2017), which are essential for simplification. For this, our word-embeddings are trained using the Dict2Vec framework3. Dict2Vec fine-tunes the embeddings through the help of an external lexicon containing weak and strong synonymy relations. The system is trained on our whole unlabeled datasets and with seed synonymy dictionaries provided by Tissier et al. (2017). Our encoder and decoders share the same word embeddings. Moreover, the embeddings at the input side are kept static but the decoder embeddings are updated as training progresses. Details about hyperparameters are given in Section 5.2. 4 Training Procedure Let S and D be sets of simple and complex sentences respectively from large scale unlabeled repositories of simple and complex sentences. 
Let Xs denote a sentence sampled from the set of simple sentences S and Xd be a sentence sampled from the set of complex sentences D. Let θE denote the parameters of E and θGs , θGd denote the parameters of Gs and Gd respectively. Also, θC and θD are the parameters of the discriminator and the classifier modules. Training the model involves optimization of the above parameters with respect to the following losses and denoising, which are explained below. 4.1 Reconstruction Loss Reconstruction Loss is imposed on both E −Gs and E −Gd paths. E −Gs is trained to recon3https://github.com/tca19/dict2vec struct sentences from S and E −Gd is trained to reconstruct sentences from D. Let PE−Gs(X) and PE−Gd(X) denote the reconstruction probabilities of an input sentence X estimated by the E −Gs and E −Gd models respectively. Reconstruction loss for E −Gs and E −Gd , denoted by Lrec is computed as follows. Lrec(θE, θGs, θGd) = −EXs∼S[log PE−Gs(Xs)]− EXd∼D[log PE−Gd(Xd)] (2) 4.2 Adversarial Loss Adversarial Loss is imposed upon the context vectors for Gs. The idea is that, context vectors extracted even for a complex input sentence by Gs should resemble the context vectors from a simple input sentence. The discriminator D is trained to distinguish the fake (complex) context vectors from the real (simple) context vectors. E −Gs is trained to perplex the discriminator D, and eventually, at convergence, learns to produce real-like (simple) context vectors from complex input sentences. In practice, we observe that adversarial loss indeed assists E −Gs in simplification by encouraging sentence shortening. Let As(.) be a sequence of context vectors as defined in Section 3.1. Adversarial losses for E −Gs , denoted by Ladv,Gs and for discriminator D, denoted by Ladv,D are as follows. Ladv,D(θD) = −EXs∼S[log (D(As(Xs)))]− EXd∼D[log (1 −D(As(Xd))] (3) Ladv,Gs(θE, θGs) = −EXd∼D[log (D(As(Xd)))] (4) 4.3 Diversification Loss Diversification Loss is imposed by the classifier C on context vectors extracted by Gd from complex input sentences in contrast with context vectors extracted by Gs from simple input sentences. This helps E −Gs to learn to generate simple context vectors distinguishable from complex context vectors. Let As(.) and Ad(.) be sequence of context vectors as defined in Section 3.1. Losses for classifier C, denoted by Ldiv,C and for model E −Gs denoted by Ldiv,Gs are computed as follows. Ldiv,C(θC) = −EXs∼S[log (C(As(Xs)))]− EXd∼D[log (1 −C(Ad(Xd)))] (5) Ldiv,Gs(θE, θGs) = −EXd∼D[log (C(As(Xd)))] (6) Algorithm 1 Unsupervised simplification algorithm using denoising, reconstruction, adversarial and diversification losses. Input: simple dataset S, complex dataset D. Initialization phase: repeat Update θE, θGs, θGd using Ldenoi Update θE, θGs, θGd using Lrec Update θD, θC using Ladv,D Ldiv,C until specified number of steps are completed Adversarial phase: repeat Update θE, θGs, θGd using Ldenoi Update θE, θGs, θGd using Ladv,Gs, Ldiv,Gs, Lrec Update θD, θC using Ladv,D, Ldiv,C until specified number of steps are completed 4.4 Denoising Denoising has proven to be helpful to learn syntactic / structural transformation from the source side to the target side (Artetxe et al., 2018b). Syntactic transformation often requires reordering the input, which the denoising procedure aims to capture. Denoising involves arbitrarily reordering the inputs and reconstructing the original (unperturbed) input from such reordered inputs. 
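Concretely, the adversarial and diversification terms above are ordinary binary cross-entropy objectives computed on the attention context sequences. The sketch below is a minimal illustration, assuming D and C output probabilities in (0, 1) and leaving the parameter-update routing (which losses update which parameters) to Algorithm 1; the denoising implementation continues below.

```python
import torch

def adversarial_losses(D, As_simple, As_complex):
    # Eqs. (3)-(4): D treats context sequences of simple inputs as real and those
    # of complex inputs as fake; E-Gs is rewarded when D is fooled on complex inputs.
    real, fake = D(As_simple), D(As_complex)
    loss_D = -(torch.log(real) + torch.log(1.0 - fake)).mean()
    loss_E_Gs = -torch.log(fake).mean()
    return loss_D, loss_E_Gs

def diversification_losses(C, As_simple, Ad_complex, As_complex):
    # Eqs. (5)-(6): C separates Gs-side from Gd-side contexts, while E-Gs pushes
    # contexts of complex inputs toward the "simple" side of C.
    loss_C = -(torch.log(C(As_simple)) + torch.log(1.0 - C(Ad_complex))).mean()
    loss_E_Gs = -torch.log(C(As_complex)).mean()
    return loss_C, loss_E_Gs
```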
In our implementation, the source sentence is reordered by swapping bigrams in the input sentences. The following loss function are used in denoising. Let PE−Gs(X|noise(X)) and PE−Gd(X|noise(X)) denote the probabilities that a perturbed input X can be reconstructed by E −Gs and E −Gd respectively. Denoising loss for models E −Gs and E −Gd , denoted by Ldenoi(θE, θGs, θGd) is computed as follows. Ldenoi = −EXs∼S[log PE−Gs(Xs|noise(Xs))]− EXd∼D[log PE−Gd(Xd|noise(Xd))] (7) Figure 1 depicts the overall architecture and the losses described above; the training procedure is described in Algorithm 1. The initialization phase involves training the E −Gs, E −Gd using the reconstruction and denoising losses only. Next, training of D and C happens using the respective adversarial or diversification losses. These losses are not used to update the decoders at this point. This gives the discriminator, classifier and decoders time to learn independent of each other. In the adversarial phase, adversarial and diversification losses are introduced alongside denoising and reconstruction losses for fine-tuning the encoder and decoders. Algorithm 1 is intended to produce the following results: i) E −Gs should simplify its input (irrespective of whether it is simple or complex), and ii) E −Gd should act as an auto-encoder in complex sentence domain. The discriminator and classifier enables preserving the appropriate aspects of semantics necessary for each of these pathways through proper modulation of the attention context vectors. A key requirement for a model like ours is that the dataset used has to be partitioned into two sets, containing relatively simple and complex sentences. The rationale behind having two decoders is that while Gs will try to introduce simplified constructs (may be at the expense of loss of semantics), Gd will help preserve the semantics. The idea behind using the discriminator and classifier is to retain signals related to language simplicity from which Gs will construct simplified sentences. Finally, denoising will help tackle nuances related to syntactic transfer from complex to simple direction. We remind the readers that, TS, unlike machine translation, needs complex syntactic operations such as sentence splitting, rephrasing and paraphrasing, which can not be tackled by the losses and denoising alone. Employing additional explicit mechanisms to handle these in the pipeline is out of the scope of this paper since we seek a prima-facie judgement of our architecture based on how much simplification knowledge can be gained just from the data. 4.5 Training with Minimal Supervision Our system, by design, is highly data-driven, and like any other sequence-to-sequence learning based system, can also leverage labeled data. We propose a semi-supervised variant of our system that could gain additional knowledge of simplification through the help of a small amount of labeled data (in the order of a few thousands). The system undergoes training following steps similar to Algorithm 1, except that it adds another step of optimizing the cross entropy loss for both the E −Gs and E −Gd pathways by using the reference texts available in the labeled dataset. This step is carried out in the adversarial phase along with other steps (See Algorithm 2). The cross-entropy loss is imposed on both E −Gs and E −Gd paths using parallel dataset (details mentioned in Section 5.1) denoted by ∆= (Sp, Dp). 
For a given parallel simplification sentence pair (Xs, Xd), let PE−Gs(Xs|Xd) and PE−Gd(Xd|Xs) denote the probabilities that Xs is produced from Xd by the E −Gs and the reverse is produced by the E −Gd respectively. Cross-Entropy loss for E −Gs and E −Gd denoted by Lcross(θE, θGs, θGd) is computed as follows: Lcross = −E(Xs,Xd)∼∆[log PE−Gs(Xs|Xd)]− E(Xs,Xd)∼∆[log PE−Gd(Xd|Xs)] (8) Algorithm 2 Semi-supervised simplification algorithm using denoising, reconstruction, adversarial and diversification losses followed by crossentropy loss using parallel data. Input: simple dataset S, complex dataset D, parallel dataset ∆= (Sp, Dp) Initialization phase: repeat Update θE, θGs, θGd using Ldenoi Update θE, θGs, θGd using Lrec Update θD, θC using Ladv,D Ldiv,C until specified number of steps are completed Adversarial phase: repeat Update θE, θGs, θGd using Ldenoi Update θE, θGs, θGd using Ladv,Gs, Ldiv,Gs, Lrec Update θD, θC using Ladv,D, Ldiv,C Update θE, θGs using Lcross Update θE, θGd using Lcross until specified number of steps are completed 5 Experiment Setup In this section we describe the dataset, architectural choices, and model hyperparameters. The implementation of the experimental setup is publicly available4. 5.1 Dataset For training our system, we created an unlabeled dataset of simple and complex sentences by partitioning the standard en-wikipedia dump. Since partitioning requires a metric for measuring text simpleness we categorize sentences based on their 4https://github.com/subramanyamdvss/UnsupNTS Category #Sents Avg. Avg. FEWords FE Range Simple 720k 18.23 76.67 74.9-79.16 Complex 720k 35.03 7.26 5.66-9.93 Table 1: Statistics showing number of sentences, average words per sentence, and average FE score, FE score limits for complex and simple datasets used for training. readability scores. For this we use the Flesch Readability Ease (henceforth abbreviated as FE) (Flesch, 1948). Sentences with lower FE values (up to 10) are categorized as complex and sentences with FE values greater than 70 are categorized as simple5. The FE bounds are decided through trial and error through manual inspection of the categorized sentences. Table 1 shows dataset statistics. Even though the dataset was created with some level of human mediation, the manual effort is insignificant compared to that needed to create a parallel corpus. To train the system with minimal supervision (Section 4.5), we extract 10, 000 pairs of sentences from various datasets such as WikipediaSimpleWikipedia dataset introduced in Hwang et al. (2015) and the Split-Rephrase dataset by Narayan et al. (2017)6. The WikipediaSimpleWikipedia was filtered following Nisioi et al. (2017) and 4000 examples were randomly picked from the filtered set. From the SplitRephrase dataset, examples containing one compound/complex sentence at the source side and two simple sentences at the target side were selected and 6000 examples were randomly picked from the selected set. The Split-Rephrase dataset is used to promote sentence splitting in the proposed system. To select and evaluate our models, we use the test and development sets7 released by (Xu et al., 2016). The test set (359 sentences) and development set (2000 sentences) have 8 simplified reference sentences for each source sentence. 5.2 Hyperparameter Settings For all the variants, we use a hidden state of size 600 and word-embedding size of 300. 
Classifier C 5FE has its shortcomings to fully judge simpleness, but we nevertheless employ it in the absence of stronger metrics 6https://github.com/shashiongithub/Split-and-Rephrase 7We acknowledge that other recent datasets such as Newsela could have been used for development and evaluation. We could not get access to the dataset unfortunately. and discriminator D use convolutional layers with filters sizes from 1 to 5. 128 filters of each size are used in the CNN-layers. Other training related hyper parameters include learning rate of 0.00012 for θE, θGs, θGd , 0.0005 for θD, θC and batch size of 36. For learning the word-embedding using Dict2Vec training, the window size is set to 5. Our experiments used at most 13 GB of GPU memory. The Initialization phase and Adversarial phase took 6000 and 8000 steps in batches respectively for both UNTS and UNTS+10K systems. 5.3 Evaluation Metrics For automatic evaluation of our system on the test data, we used four metrics, (a) SARI (b) BLEU (c) FE Difference (d) Word Difference, which are briefly explained below. SARI (Xu et al., 2016) is an automatic evaluation metric designed to measure the simpleness of the generated sentences. SARI requires access to source, predictions and references for evaluation. Computing SARI involves penalizing the n-gram additions to source which are inconsistent with the references. Similarly, deletions and keep operations are penalized. The overall score is a balanced sum of all the penalties. BLEU (Papineni et al., 2002), a popular metric to evaluate generations and translations is used to measure the correctness of the generations by measuring overlaps between the generated sentences and (multiple) references. We also compute the average FE score difference between predictions and source in our evaluations. FE-difference measures whether the changes made by the model increase the readability ease of the generated sentence. Word Difference is the average difference between number of words in the source sentence and generation. It is a simple and approximate metric proposed to detect if sentence shortening is occurring or not. Generations with lesser number of changes can still have high SARI and BLEU. Models with such generations can be ruled out by imposing a threshold on the word-diff metric. Models with high word-diff, SARI and BLEU are picked during model-selection (with validation data). Model selection also involved manually examining the quality and relevance of generations. We carry out a qualitative analysis of our system through human evaluation. For this the first 50 test samples were selected from the test data. Output of the seven systems reported in Table 2 along with the sources are presented to two native English speakers who would provide two ratings for each output: (a) Simpleness, a binary score [0-1] indicating whether the output is a simplified version of the input or not, (b) Grammaticality of the output in the range of [1-5], in the increasing order of fluency (c) Relatedness score in the range of [15] showing if the overall semantics of the input is preserved in the output or not. 5.4 Model Variants Using our design, we propose two different variants for evaluation: (i) Unsupervised Neural TS (UNTS) with SARI as the criteria for model selection, (ii) UNTS with minimal supervision using 10000 labelled examples (UNTS+10K). Models selected using other selection criteria such as BLEU resulted in similar and/or reduced performance (details skipped for brevity). 
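The two simpler metrics, FE-difference and word-difference, can be computed directly from source/prediction pairs, while SARI and BLEU come from their standard reference implementations. A sketch is shown below; the textstat package is one possible choice for the Flesch Reading Ease score, not necessarily what was used here.

```python
import textstat  # one possible FE implementation; any Flesch Reading Ease scorer works

def fe_diff(sources, predictions):
    """Average Flesch Reading Ease gain of predictions over their sources."""
    gains = [textstat.flesch_reading_ease(p) - textstat.flesch_reading_ease(s)
             for s, p in zip(sources, predictions)]
    return sum(gains) / len(gains)

def word_diff(sources, predictions):
    """Average reduction in token count from source to prediction
    (positive values indicate sentence shortening)."""
    diffs = [len(s.split()) - len(p.split()) for s, p in zip(sources, predictions)]
    return sum(diffs) / len(diffs)
```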
We carried out the following basic postprocessing steps on the generated outputs. The OOV(out of vocabulary) words in the generations are replaced by the source words with high attention weights. Words repeated consecutively in the generated sentences are merged. 5.5 Systems for Comparison In the absence of any other direct baseline for end-to-end TS, we consider the following unsupervised baselines. We consider the unsupervised NMT framework proposed by (Artetxe et al., 2018b) as a baseline. It uses techniques such as backtranslation and denoising techniques to synthesize more training examples. To use this framework, we treated the set of simple and complex sentences as two different languages. Same model configuration as reported by Artetxe et al. (2018b) is used. We use the term UNMT for this system. Similar to the UNMT system, we also consider unsupervised statistical machine translation (termed as USMT) proposed by Artetxe et al. (2018a), with default parameter setting. Another system, based on the cross alignment technique proposed by Shen et al. (2017) is also used for comparison. The system is originally proposed for the task of sentiment translation. We term this system as ST. We also compare our approach with existing supervised and unsupervised lexical simplifications like LIGHTLS (Glavaˇs and ˇStajner, 2015), Neural Text Simplification or NTS (Nisioi et al., 2017), Syntax based Machine Translation or SBMT (Xu System FE-diff SARI BLEU Word-diff UNTS+10K 10.45 35.29 76.13 2.38 UNTS 11.15 33.8 74.24 3.55 UNMT 6.60 33.72 70.84 0.74 USMT 13.84 32.11 87.36 -0.01 ST 54.38 14.97 0.73 5.61 NTS 5.37 36.1 79.38 2.73 SBMT 17.68 38.59 73.62 -0.84 PBSMT 9.14 34.07 67.79 2.26 LIGHTLS 3.01 34.96 83.54 -0.02 Table 2: Comparison of evaluation metrics for proposed systems (UNTS), unsupervised baseline (UNMT,USMT, and ST) and existing supervised and the unsupervised lexical simplification system LIGHTLS. System Simpleness Fluency Relatedness UNTS+10K 57% 4.13 3.93 UNTS 47% 3.86 3.73 UNMT 40% 3.8 4.06 NTS 49% 4.13 3.26 SBMT 53% 4.26 4.06 PBSMT 53% 3.8 3.93 LIGHTLS 6% 4.2 3.33 Table 3: Average human evaluation scores for simpleness and grammatical correctness (fluency) and semantic relatedness between the output and input. et al., 2016), and Phrase-based SMT simplification or PBSMT (Wubben et al., 2012). All the systems are trained using the Wikipedia-SimpleWikipedia dataset (Hwang et al., 2015). The test set is same for all of these and our models. 6 Results Table 2 shows evaluation results of our proposed approaches along with existing supervised and unsupervised alternatives. We observe that unsupervised baselines such as UNMT and USMT often, after attaining convergence, recreates sentences similar to the inputs. This explains why they achieve higher BLEU and reduced worddifference scores. The ST system did not converge for our dataset after significant number of epochs which affected the performance metrics. The system often produces short sentences which are simple but do not retain important phrases. Other supervised systems such as SBMT and NTS achieve better content reduction as shown through SARI, BLEU and FE-diff scores; this is expected. However, it is still a good sign that the scores for the unsupervised system UNTS are not far from the supervised skylines. The higher word-diff scores for the unsupervised system also indicate that it is able to perform content reduction (a form of syntactic simplification), which is crucial to TS. 
This is unlike the existing unsupervised LIGHTLS system which often replaces nouns with related non-synonymous nouns; sometimes increasing the complexity and affecting the meaning. Finally, it is worth noting that aiding the system with a very small amount of labeled data can also benefit our unsupervised pipeline, as suggested by the scores for the UNTS+10K system. In Table 3, the first column represents what percentage of output form is a simplified version of the input. The second and third columns present the average fluency (grammaticality) scores given by human evaluators and semantic relatedness with input scored through automatic means. Almost all systems are able to produce sentences that are somewhat grammatically correct and retain phrases from input. Supervised systems like PBSMT, as expected, simplify the sentences to the maximum extent. However, our unsupervised variants have scores competitive to the supervised skylines, which is a positive sign. Table 4 shows an anecdotal example, containing outputs from the seven systems. As can be seen, the quality of output from our unsupervised variants, is far from that of the reference output. However, the attempts towards performing lexical simplification (by replacing the word “Neverthless” with “However”) and simplification of multi-word phrases (“Tagore emulated numerous styles” getting translated to “Tagore replaced many styles”) are quite visible and encouraging. Table 5 presents a few examples demonstrating the capabilities of our system in performing simplifications at lexical and syntactic level. We do observe that such operations are carried out only for a few instances in our test data. Also, our analysis in Appendix B indicate that the system can improve over time with addition of more data. Results for ablations on adversarial and diversification loss are also included in Appendix A. 7 Conclusion In this paper, we made a novel attempt towards unsupervised text simplification. We gathered unlabeled corpora containing simple and complex sentences and used them to train our system that is System Output Input Nevertheless , Tagore emulated numerous styles , including craftwork from northern New Ireland , Haida carvings from the west coast of Canada ( British Columbia ) , and woodcuts by Max Pechstein . Reference Nevertheless , Tagore copied many styles , such as crafts from northern New Ireland , Haida carvings from the west coast of Canada and wood carvings by Max Pechstein . UNTS+10K Nevertheless , Tagore replaced many styles , including craftwork from northern New Ireland , Haida carved from the west coast of Canada ( British Columbia ) . UNTS However , Tagore notably numerous styles , including craftwork from northern New Ireland , Haida carved from the west coast of Canada ( British ) . UNMT However , Tagore featured numerous styles including craftwork from northern New Ireland , Haida from the west coast of Canada ( British Columbia ) max by Max Pechstein . USMT Nevertheless , Mgr emulated numerous styles , including craftwork from northern New Ireland , Haida carvings from the west coast of Canada (British Columbia) , and etchings by Max Pechstein . NTS However , Tagore wrote many styles , including craftwork from northern New Ireland , Haida carvings from the west coast of Canada ( British Columbia ) . SBMT However , Tagore emulated many styles , such as craftwork in north New Ireland , Haida prints from the west coast of Canada ( British Columbia ) , and woodcuts by Max Pechstein . 
PBSMT Nevertheless , he copied many styles , from new craftwork , Haida carvings from the west coast of Canada in British Columbia and woodcuts by Max Pechstein . LIGHTLS However , Tagore imitated numerous styles , including craftwork from northern New Ireland , Haida sculptures from the west coast of Canada ( British Columbia ) , and engravings by Max Pechstein . Table 4: Example predictions from different systems. Type of Simplification Source Prediction Splitting Calvin Baker is an American novelist . Calvin Baker is an American . American Baker is a birthplace . Sentence Shortening During an interview , Edward Gorey mentioned that Bawden was one of his favorite artists , lamenting the fact that not many people remembered or knew about this fine artist . During an interview , Edward Gorey mentioned that Bawden was one of his favorite artists . Lexical Replacement In architectural decoration Small pieces of colored and iridescent shell have been used to create mosaics and inlays , which have been used to decorate walls , furniture and boxes . In impressive decoration Small pieces of colored and reddish shell have been used to create statues and inlays , which have been used to decorate walls , furniture and boxes . Table 5: Examples showing different types of simplifications performed by the best model UNTS+10K. based on a shared encoder and two decoders. A novel training scheme is proposed which allows the model to perform content reduction and lexical simplification simultaneously through our proposed losses and denoising. Experiments were conducted for multiple variants of our system as well as known unsupervised baselines and supervised systems. Qualitative and quantitative analysis of the outputs for a publicly available test data demonstrate that our models, though unsupervised, can perform better than or competitive to these baselines. In future, we would like to improve the system further by incorporating better architectural designs and training schemes to tackle complex simplification operations. 8 Acknowledgements We thank researchers at IBM IRL, IIT Kharagpur, Vishal Gupta and Dr. Sudeshna Sarkar for helpful discussions in this project. References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632–3642. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised neural machine translation. In Proceedings of the Sixth International Conference on Learning Representations. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv:1409.0473. Or Biran, Samuel Brody, and No´emie Elhadad. 2011. Putting it simply: a context-aware approach to lexical simplification. In ACL, pages 496–501. Association for Computational Linguistics. Laetitia Brouwers, Delphine Bernhard, Anne-Laure Ligozat, and Thomas Franc¸ois. 2014. Syntactic sentence simplification for french. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR)@ EACL 2014, pages 47–56. Arnaldo Candido Jr, Erick Maziero, Caroline Gasperin, Thiago AS Pardo, Lucia Specia, and Sandra M Aluisio. 2009. Supporting the adaptation of texts for poor literacy readers: a text simplification editor for brazilian portuguese. In Innovative Use of NLP for Building Educational Applications, pages 34–42. 
Association for Computational Linguistics. Yvonne Canning and John Tait. 1999. Syntactic simplification of newspaper text for aphasic readers. In Customised Information Delivery, pages 6–11. Raman Chandrasekar and Bangalore Srinivas. 1997. Automatic induction of rules for text simplification1. Knowledge-Based Systems, 10(3):183–190. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734. James Clarke and Mirella Lapata. 2006. Models for sentence compression: A comparison across domains, training requirements and evaluation measures. In COLING, pages 377–384. Association for Computational Linguistics. William Coster and David Kauchak. 2011. Simple english wikipedia: a new text simplification task. In ACL, pages 665–669. Association for Computational Linguistics. Siobhan Devlin. 1998. The use of a psycholinguistic database in the simplification of text for aphasic readers. Linguistic databases. Katja Filippova, Enrique Alfonseca, Carlos A Colmenares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with lstms. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 360–368. Katja Filippova and Michael Strube. 2008. Dependency tree based sentence compression. In INLG, pages 25–32. Association for Computational Linguistics. Rudolph Flesch. 1948. A new readability yardstick. Journal of applied psychology, 32(3):221. Goran Glavaˇs and Sanja ˇStajner. 2015. Simplifying lexical simplification: do we need simplified corpora? In ACL, volume 2, pages 63–68. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680. William Hwang, Hannaneh Hajishirzi, Mari Ostendorf, and Wei Wu. 2015. Aligning sentences from standard wikipedia to simple wikipedia. In NAACLHLT, pages 211–217. Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv:1408.5882. Kevin Knight and Daniel Marcu. 2002. Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artificial Intelligence, 139(1):91–107. J. L’Allier. 1980. An evaluation study of a computerbased lesson that adjusts read- ing level by monitoring on task reader characteristics. Ph.D. Thesis. Tracy Linderholm, Michelle Gaddy Everson, Paul Van Den Broek, Maureen Mischinski, Alex Crittenden, and Jay Samuels. 2000. Effects of causal text revisions on more-and less-skilled readers’ comprehension of easy and difficult texts. Cognition and Instruction, 18(4):525–556. Danielle S McNamara, Eileen Kintsch, Nancy Butler Songer, and Walter Kintsch. 1996. Are good texts always better? interactions of text coherence, background knowledge, and levels of understanding in learning from text. Cognition and instruction, 14(1):1–43. Shashi Narayan and Claire Gardent. 2014. Hybrid simplification using deep semantics and machine translation. 
In ACL, volume 1, pages 435–445. Shashi Narayan, Claire Gardent, Shay Cohen, and Anastasia Shimorina. 2017. Split and rephrase. In EMNLP 2017: Conference on Empirical Methods in Natural Language Processing, pages 617–627. Sergiu Nisioi, Sanja ˇStajner, Simone Paolo Ponzetto, and Liviu P Dinu. 2017. Exploring neural text simplification models. In ACL, volume 2, pages 85–91. Gustavo H Paetzold and Lucia Specia. 2016. Unsupervised lexical simplification for non-native speakers. In AAAI, pages 3761–3767. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL, pages 311– 318. Association for Computational Linguistics. Sarah E Petersen and Mari Ostendorf. 2007. Text simplification for language learners: a corpus analysis. In Workshop on Speech and Language Technology in Education. Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 529–535. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in neural information processing systems, pages 6830–6841. Advaith Siddharthan. 2006. Syntactic simplification and text cohesion. Research on Language and Computation, 4(1):77–109. Advaith Siddharthan. 2014. A survey of research on text simplification. ITL-International Journal of Applied Linguistics, 165(2):259–298. Lucia Specia. 2010. Translating from complex to simplified sentences. In Computational Processing of the Portuguese Language, pages 30–39. Springer. Sanja ˇStajner, Hannah Bechara, and Horacio Saggion. 2015. A deeper exploration of the standard pb-smt approach to text simplification and its evaluation. In ACL-IJCNLP, volume 2, pages 823–828. Sanja ˇStajner and Sergiu Nisioi. 2018. A Detailed Evaluation of Neural Sequence-to-Sequence Models for In-domain and Cross-domain Text Simplification. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Julien Tissier, Christophe Gravier, and Amaury Habrard. 2017. Dict2vec : Learning word embeddings using lexical dictionaries. In EMNLP, pages 254–263. Tong Wang, Ping Chen, John Rochford, and Jipeng Qiang. 2016. Text simplification using neural machine translation. In AAAI. Sander Wubben, Antal Van Den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 1015–1024. Association for Computational Linguistics. Jingjing Xu, SUN Xu, Qi Zeng, Xiaodong Zhang, Xuancheng Ren, Houfeng Wang, and Wenjie Li. 2018. Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 979–988. Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. 
A Ablation Studies

The following table shows results of the proposed system with ablations on the adversarial loss (UNTS-ADV) and the diversification loss (UNTS-DIV).

System          FE-diff   SARI    BLEU    Word-diff
UNTS+10K        10.45     35.29   76.13   2.38
UNTS-DIV+10K    11.32     35.24   75.59   2.61
UNTS-ADV+10K    10.32     35.08   76.19   2.64
UNTS            11.15     33.8    74.24   3.55
UNTS-DIV        14.15     34.38   68.65   3.46
UNTS-ADV        12.13     34.74   73.21   2.72

Table 6: UNTS-ADV does not use the adversarial loss; UNTS-DIV does not use the diversification loss.

B Effects of Variation in Labeled Data Size

The following table shows the effect of labeled-data size on the performance of the system. We supplied the system with 2K, 5K, and 10K pairs of complex and simple sentences. Among the trained models, those with similar word-diff are chosen for a fair comparison. We observe that both BLEU and SARI increase as the amount of labeled data grows.

System      FE-diff   SARI    BLEU    Word-diff
UNTS+10K    11.65     35.14   75.71   3.05
UNTS+5K     11.69     34.39   70.96   3.01
UNTS+2K     11.64     34.17   72.63   3.26
UNTS        11.15     33.8    74.24   3.55

Table 7: Effect of varying the amount of labeled data used as additional supervision when training the unsupervised systems.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2069–2078 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2069 Syntax-Infused Variational Autoencoder for Text Generation Xinyuan Zhang1∗, Yi Yang2∗, Siyang Yuan1, Dinghan Shen1, Lawrence Carin1 1Duke University 2ASAPP Inc. [email protected], [email protected] Abstract We present a syntax-infused variational autoencoder (SIVAE), that integrates sentences with their syntactic trees to improve the grammar of generated sentences. Distinct from existing VAE-based text generative models, SIVAE contains two separate latent spaces, for sentences and syntactic trees. The evidence lower bound objective is redesigned correspondingly, by optimizing a joint distribution that accommodates two encoders and two decoders. SIVAE works with long shortterm memory architectures to simultaneously generate sentences and syntactic trees. Two versions of SIVAE are proposed: one captures the dependencies between the latent variables through a conditional prior network, and the other treats the latent variables independently such that syntactically-controlled sentence generation can be performed. Experimental results demonstrate the generative superiority of SIVAE on both reconstruction and targeted syntactic evaluations. Finally, we show that the proposed models can be used for unsupervised paraphrasing given different syntactic tree templates. 1 Introduction Neural language models based on recurrent neural networks (Mikolov et al., 2010) and sequence-tosequence architectures (Sutskever et al., 2014) have revolutionized the NLP world. Deep latent variable modes, in particular, the variational autoencoders (VAE) (Kingma and Welling, 2014; Rezende et al., 2014) integrating inference models with neural language models have been widely adopted on text generation (Bowman et al., 2016; Yang et al., 2017; Kim et al., 2018), where the encoder and the decoder are modeled by long short-term memory ∗Part of this work was done when the first two authors were at Bloomberg. S NP NP DT The NN book SBAR WHNP IN that S NP PRP you VP VBP love VP VBZ is ADJP JJ good . . Figure 1: An example of a constituency tree structure. (LSTM) networks (Chung et al., 2014). For a random vector from the latent space representing an unseen input, the decoder can generate realisticlooking novel data in the context of a text model, making the VAE an attractive generative model. Compared to simple neural language models, the latent representation in a VAE is supposed to give the model more expressive capacity. Although syntactic properties can be implicitly discovered by such generative models, Shi et al. (2016) show that many deep structural details are still missing in the generated text. As a result of the absence of explicit syntactic information, generative models often produce ungrammatical sentences. To address this problem, recent works attempt to leverage explicit syntactic knowledge to improve the quality of machine translation (Eriguchi et al., 2016; Bastings et al., 2017; Chen et al., 2017) and achieve good results. Motivated by such success, we suggest that deep latent variable models for text generation can also benefit from the incorporation of syntactic knowledge. Instead of solely modeling sentences, we want to utilize augmented data by introducing an auxiliary input, a syntactic tree, to enrich the latent representation and make the generated sentences more grammatical and fluent. 
Syntactic trees can either be obtained from existing human-labeled 2070 trees or syntactically parsed sentences using well-developed parsers. An example of a constituency tree is shown in Figure 1. In this work, we remove leaf nodes and linearize the bracketed parse structure into a syntactic tree sequence to simplify the encoding and decoding processes. For example, the syntactic tree sequence for the sentence “The book that you love is good.” is (S(NP(NP(DT)(NN))(SBAR(WHNP(IN))(S(NP(PRP ))(VP(VBP)))))(VP(VBZ)(ADJP(JJ)))(.)). Given such data, we aim to train a latent variable model that jointly encodes and decodes a sentence and its syntactic tree. We propose a syntax-infused VAE model to help improve generation, by integrating syntactic trees with sentences. In contrast to the current VAEbased sentence-generation models, a key differentiating aspect of SIVAE is that we map the sentences and the syntactic trees into two latent representations, and generate them separately from the two latent spaces. This design decouples the semantic and syntactic representations and makes it possible to concentrate generation with respect to either syntactic variation or semantic richness. To accommodate the two latent spaces in one VAE framework, the evidence lower bound (ELBO) objective needs to be redesigned based on optimizing the joint log likelihood of sentences and syntactic trees. This new objective makes SIVAE a task-agnostic model, with two encoders and two decoders, so that it can be further used for other generative tasks. Two variants of SIVAE that differ in the forms of the prior distributions corresponding to the syntactic tree latent variables are presented. SIVAE-c captures dependencies between two latent variables by making the syntax prior conditioned on the sentence prior. During generation, we first sample a latent variable from the sentence latent space and then sample the syntactic tree latent variable depending on the sampled sentence latent variable. This process resembles how humans write: think about substances like entities and topics first, then realize with a specific syntactic structure. We further propose SIVAE-i assuming the two priors are independent, and change the ELBO of the joint log likelihood correspondingly. This independence assumption manifests syntactically-controlled sentence generation as it allows to alter the syntactic structure, desirable for related tasks like paraphrase generation. Given a sentence and a syntactic tree template, the model produces a paraphrase of the sentence whose syntax conforms to the template. Our SIVAE-based paraphrasing network is purely unsupervised, which makes it particularly suitable for generating paraphrases in low-resource languages or types of content. The experiments are conducted on two datasets: one has trees labeled by humans and the other has trees parsed by a state-of-the-art parser (Kitaev and Klein, 2018). Other than employing the standard language modeling evaluation metrics like perplexity, we also adopt the targeted syntactic evaluation (Marvin and Linzen, 2018) to verify whether the incorporation of syntactic trees improves the grammar of generated sentences. Experiments demonstrate that the proposed model improves the quality of generated sentences compared to other baseline methods, on both the reconstruction and grammar evaluations. The proposed methods show the ability for unsupervised paraphrase generation under different syntactic tree templates. 
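To make the linearization step described above concrete (dropping the leaf words and keeping only the bracketed non-terminal labels), a minimal Python sketch could look like the following; the function name and the exact tokenization of the bracketed parse are illustrative assumptions rather than the authors' implementation.

```python
import re

def linearize_tree(bracketed_parse: str) -> str:
    """Strip terminal words from a bracketed constituency parse and return
    the linearized syntactic tree sequence used as model input.

    Example input : (S (NP (DT The) (NN book)) (VP (VBZ is) (ADJP (JJ good))) (. .))
    Example output: (S(NP(DT)(NN))(VP(VBZ)(ADJP(JJ)))(.))
    """
    # Tokenize into parentheses, constituent/POS labels, and terminal words.
    tokens = re.findall(r"\(|\)|[^\s()]+", bracketed_parse)
    out, prev = [], None
    for tok in tokens:
        if tok in ("(", ")"):
            out.append(tok)
        elif prev == "(":
            # The token right after "(" is a non-terminal label: keep it.
            out.append(tok)
        # Any other token is a terminal word (leaf): drop it.
        prev = tok
    return "".join(out)

print(linearize_tree("(S (NP (DT The) (NN book)) (VP (VBZ is) (ADJP (JJ good))) (. .))"))
# -> (S(NP(DT)(NN))(VP(VBZ)(ADJP(JJ)))(.))
```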
Our contributions are four-fold: i) We propose a syntax-infused VAE that integrates syntactic trees with sentences, to grammatically improve the generated sentences. ii) We redesign the ELBO of the joint log likelihood, to accommodate two separate latent spaces in one VAE framework, for two SIVAE model variants based on different intuitions, which can be further used for other applications. iii) We evaluate our models on data with humanconstituted trees or parsed trees, and yield promising results in generating sentences with better reconstruction loss and less grammatical errors, compared to other baseline methods. iv) We present an unsupervised paraphrasing network based on SIVAE-i that can perform syntactically controlled paraphrase generation. 2 Methodology Given a sentence x and its corresponding syntactic tree y, the goal is to jointly encode x and y into latent representations zx ∈Rd and zy ∈Rd, and then decode them jointly from the two latent spaces. We employ the VAE framework such that realisticlooking novel sentences can be generated with randomly sampled latent representations. However, current VAE-based language models cannot accommodate two separate latent spaces for zx and zy. To incorporate x, y, zx, and zy in one VAE framework, the objective needs to be redesigned to optimize the log joint likelihood log p(x, y). We propose two model variants of SIVAE. The first 2071 L S T M L S T M L S T M … L S T M L S T M L S T M … L S T M L S T M L S T M … L S T M L S T M L S T M … [ℎ|$|;ℎ&] 𝑥& 𝑥) 𝑥|$| 𝑥& 𝑥) 𝑥|$| ℎ|*| 𝑐|*| 𝑦|*| 𝑦|*| 𝑦) 𝑦& 𝑦) 𝑦& Linear Linear 𝜇$ 𝜎$ 𝒩 𝑧$ [ℎ|*|;ℎ&] Linear Linear 𝜇* 𝜎* 𝒩 𝑧* Sentence Encoder Sentence Decoder Tree Encoder Tree Decoder Sentence Vocabulary Tree Vocabulary MLP MLP 𝒩 𝜇2 𝜎2 Figure 2: Block diagram of the proposed SIVAE model encoding and decoding sentences and their syntactic trees jointly. The prior network (dashed lines) is used only for the sampling stage of SIVAE-c. model (SIVAE-c; Section 2.1), directly capturing the dependencies between zx and zy, presumes that semantic information should influence syntax structure. During the sampling stage, the prior for zy is drawn based on zx from a conditional prior network p(zy|zx); zx implicitly encodes the subject of the sentence, and zy encodes the corresponding syntax. Although this model has robust performance on generation, it doesn’t allow us to syntactically control the generated sentences by freely changing the syntactic tree template in zy. Thus we propose SIVAE-i (Section 2.2), which generates sentences and syntactic trees assuming the priors p(zx) and p(zy) are independent. The entire architecture is shown in Figure 2. 2.1 Modeling Syntax-Semantics Dependencies Since the syntax of a sentence is influenced by the semantics, especially when the content is long, we first propose a generative model to exploit the dependencies between zx and zy, through a conditional prior network pψ(zy|zx). Formally, SIVAEc models the joint probability of the sentence and its syntactic tree: p(x, y) = Z dzx Z dzy p(x|y, zx)p(y|zx, zy)· p(zy|zx)p(zx)dzydzx, (1) where the prior over zx is the isotropic Gaussian p(zx) = N(0, I). We define q(·) to be the variational posterior distributions that approximate the true posterior distributions. 
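Generation under Eq. (1) then amounts to ancestral sampling: draw z_x from N(0, I), draw z_y from the conditional prior p(z_y | z_x), decode the syntactic tree, and decode the sentence conditioned on both. A minimal PyTorch sketch is given below; the module names and the `generate` interface are illustrative assumptions, and the latent and hidden sizes follow the settings reported in Section 5.

```python
import torch
import torch.nn as nn

class ConditionalPriorNet(nn.Module):
    """MLP mapping z_x to the Gaussian parameters (mu', log sigma'^2) of p(z_y | z_x)."""
    def __init__(self, d_z: int = 150, d_hidden: int = 400):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_z, d_hidden), nn.ReLU())
        self.to_mu = nn.Linear(d_hidden, d_z)
        self.to_logvar = nn.Linear(d_hidden, d_z)

    def forward(self, z_x):
        h = self.mlp(z_x)
        return self.to_mu(h), self.to_logvar(h)

@torch.no_grad()
def sample_sivae_c(prior_net, tree_decoder, sent_decoder, d_z: int = 150):
    """Ancestral sampling for SIVAE-c: z_x ~ N(0, I); z_y ~ p(z_y | z_x);
    y ~ p(y | z_y); x ~ p(x | y, z_x). The decoders are assumed to expose a
    `generate` method (hypothetical interface)."""
    z_x = torch.randn(1, d_z)                                   # sentence latent
    mu, logvar = prior_net(z_x)                                 # conditional prior parameters
    z_y = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # syntax latent
    tree, tree_states = tree_decoder.generate(z_y)              # y and its final LSTM states
    sentence = sent_decoder.generate(z_x, tree_states)          # x conditioned on z_x and y
    return tree, sentence
```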
The model is trained by maximizing the lower bound of the log likelihood log p(x, y) ≥L(x, y; θ, φ, ψ) = (2) Eqφ(zx|x) log pθ(x|y, zx) −KL[qφ(zx|x)||p(zx)] + Eqφ(zy|y,zx) log pθ(y|zy) −KL[qφ(zy|y, zx)||pψ(zy|zx)], where ψ, φ, and θ are the parameters of the prior network, the recognition networks, and the generation networks, respectively. We apply the reparameterize trick to yield a differentiable unbiased estimator of the lower bound objective. Conditional Prior Network The key to SIVAEc is the conditional prior which is used to model the dependencies between the sentence latent variable zx and the syntactic tree latent variable zy. Given zx, the prior for zy is sampled from a conditional probability pψ(zy|zx) modeled by a multivariate Gaussians N(µ′, σ′2I). The parameters of the Gaussian distribution are computed from zx with a conditional prior network parameterized by ψ. In particular, µ′ and σ′2 are the outputs of multilayer perceptron (MLP) networks taking zx as the input. Recognition Networks To differentiate through the sampling stage z ∼qφ(z|x), the VAE encoder qφ(zx|x) is also assumed to be a Gaussian distribution N(µx, Σx), where µ(x) and diag(Σ(x)) are the outputs of feedforward networks taking x as the input. The recognition network consists of a bidirectional LSTM encoder to produce a sentence embedding for x and two linear networks to transform the embedding to the Gaussian parameters. The Kullback-Leibler (KL) divergence 2072 between qφ(zx|x) and the isotropic Gaussian prior p(zx) is KL(qφ(zx|x)∥p(zx)) = 1 2[−log |Σx| −d + tr(Σx) + µT x µx]. (3) So we only need to model µx and the diagonal of Σx to compute the KL divergence. To reconcile the conditional prior pψ(zy|zx), the variational posterior qφ(zy|y, zx) = N(µy, σ2 yI), also depends on the latent variable zx. µy and σ2 y are obtained from a recognition network that contains a bidirectional LSTM encoder, producing a syntactic tree embedding, and two linear networks, taking the embedding and zx as inputs. The KL divergence is then given by KL(qφ(zy|y, zx)∥pψ(zy|zx)) = 1 2[log |σ′2I| −log |σ2 yI| −d + tr( σ2 yI σ′2I) + (µ′ −µy)T σ′−2I(µ′ −µy)]. (4) Generation Networks We employ an LSTM to generate y from pθ(y|zy). A word vy is selected by computing the probability of yt = vy conditioned on previously generated words y−t and zy p(yt = vy|y−t, zy) ∝exp((vT y Wyhy t )), (5) where hy t is the current hidden states of the LSTM tree decoder hy t = LSTM(zy, e(yt−1), hy t−1, cy t−1). (6) To generate x from pθ(x|y, zx), we modify the generative model in GNMT (Shah and Barber, 2018). First, the last hidden states hy |y| and cy |y| in (6) are directly used as the generated syntactic tree y, where |y| is the length of y. Then we use another LSTM for sentence generation, hx t = LSTM(zx, e(xt−1), hy |y|, hx t−1, cx t−1). (7) The conditional probabilities of xt = vx for t = 1, · · · , |x| are computed as p(xt = vx|x−t, zx, y) ∝exp((vT x Wxhx t )). (8) In this way, the generated sentence is conditioned on zx and the generated syntactic tree y. SIVAEc selects possible syntactic tree templates for a given sentence latent variable, but the syntactic tree template cannot be freely determined. 2.2 Syntactically-Controlled Sentence Generation In order to freely change the syntactic tree template embedded in zy, we propose an alternative model assuming the independence of two priors. Let priors zx and zy be independent random variables drawn from N(0, I). 
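Both variants rely on closed-form KL terms between diagonal Gaussians (Eqs. (3) and (4) for SIVAE-c; the standard-normal case for SIVAE-i) together with the reparameterization trick. A generic helper, sketched in PyTorch under the assumption that posterior and prior parameters are given as [batch, d] tensors of means and log-variances, could be:

```python
import torch

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) ), summed over dims.
    With mu_p = 0 and logvar_p = 0 this is the KL term of Eq. (3); with the conditional
    prior parameters produced by the prior network it is the KL term of Eq. (4)."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q - 1.0
                + var_q / var_p
                + (mu_p - mu_q).pow(2) / var_p)
    return kl.sum(dim=-1)

def reparameterize(mu, logvar):
    """z = mu + sigma * eps with eps ~ N(0, I): keeps sampling differentiable."""
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
```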
The variational posteriors qφ(zx|x) and qφ(zy|y) follow Gaussian distributions parameterized by the outputs of feedforward networks, whose inputs are x and y. The model is trained by maximizing the lower bound objective log p(x, y) ≥L(x, y; θ, φ) = (9) Eqφ(zx|x) log pθ(x|y, zx) −KL[qφ(zx|x)∥p(zx)] + Eqφ(zy|y) log pθ(y|zy) −KL[qφ(zy|y)∥p(zy)]. Since y and zx are assumed to be independent when computing the joint probability p(x, y), we seek to minimize the mutual information I(y; zx) during training. The recognition networks and the generation networks of SIVAE-i are similar to those adopted in SIVAE-c, so we omit them for brevity. 3 Unsupervised Paraphrasing Paraphrases are sentences with the same meaning but different syntactic structures. SIVAE allows us to execute syntax transformation, producing the desired paraphrases with variable syntactic tree templates. The syntactically controlled paraphrase generation is inspired by Iyyer et al. (2018); the difference is that our SIVAE-based syntactic paraphrase network is purely unsupervised. Unsupervised paraphrasing can be performed using both SIVAE-c and SIVAE-i. One way to generate paraphrases is to perform syntactically controlled paraphrase generation using SIVAE-i. The latent representations of an input sentence zx and a syntactic tree template zy are fed into SIVAE-i, and the syntax of the generated sentence conforms with the explicitly selected target template. However, linearized syntactic sequences are relatively long (as shown in Table 1) and long templates are more likely to mismatch particular input sentences, which may result in nonsensical paraphrase outputs. Therefore, we use simplified syntactic sequences as templates, by taking the top two levels of the linearized constituency trees. The paraphrase generative process is: 1. Encode the original sentence to zx; 2073 Dataset Train Test Valid Ave_s Max_s Voc_s Tree Type Ave_t Max_t Voc_t PTB 39366 4921 4921 25 271 24699 Golden 113 1051 1272 wiki90M 71952 8995 8994 28 318 28907 Parsed 119 1163 387 Table 1: Statistics of the two datasets used in this paper. Ave_s/ Ave_t, Max_s/ Max_t, and Voc_s/ Voc_t denote the average length, maximum length, and vocabulary size for sentences/ tree sequences correspondingly. 2. Select and encode a syntactic template into zy; 3. Generate the reconstructed syntactic sequence y from p(y|zy); 4. Generate the paraphrase of the original sentence that conforms to y from p(x|y, zx). We can also use a trained SIVAE-c to generate paraphrases. The paraphrase generation process is similar to sampling from a standard VAE with various tempera. The difference is that SIVAE-c first selects possible syntactic tree templates using the conditional prior network pψ(zy|zx) then generates paraphrases based on the syntactic template and the latent variable. 4 Related Work Syntax-Aware Neural Text Generation The ability to generate sentences is core to many NLP tasks, such as machine translation (Bahdanau et al., 2015), summarization (Rush et al., 2015), and dialogue generation (Vinyals and Le, 2015). Recent works have shown that neural text generation can benefit from the incorporation of syntactic knowledge (Shen et al., 2018; Choe and Charniak, 2016). Sennrich and Haddow (2016) propose to augment each source word representation with its corresponding part-of-speech tag, lemmatized form and dependency label; Eriguchi et al. (2016) and Bastings et al. 
(2017) utilize a tree-based encoder and a graph convolutional network encoder respectively to embed the syntactic parse trees as part of the source sentence representations; Chen et al. (2017) model source-side syntactic trees with a bidirectional tree encoder and tree-coverage decoder; Eriguchi et al. (2017) implicitly leverage linguistic prior by treating syntactic parsing as an auxiliary task. However, most of these syntax-aware generation works only focus on neural machine translation. Deep Latent Variable Models Deep latent variable models that combine the complementary strengths of latent variable models and deep learning have drawn much attention recently. Generative adversarial networks (Goodfellow et al., 2014) and variational autoencoders (Kingma and Welling, 2014) are the two families of deep generative models that are widely adopted in applications. As VAEs allow discrete generation from a continuous space, they have been a popular variant for NLP tasks including text generation (Bowman et al., 2016; Yang et al., 2017; Xu and Durrett, 2018; Shen et al., 2019; Wang et al., 2019). The flexibility of VAEs also enables adding conditions during inference to perform controlled language generation (Hu et al., 2017; Zhao et al., 2017). Divergent from these VAE-based text generation models, our work decouples the latent representations corresponding to the sentence and its syntactic tree respectively. Paraphrase Generation Due to the similarity between two tasks, neural machine-translationbased models can often be utilized to achieve paraphrase generation (Hasan et al., 2016; Mallinson et al., 2017). Recently, Iyyer et al. (2018) proposed to syntactically control the generated paraphrase and Gupta et al. (2018) generate paraphrases in a deep generative architecture. However, all these methods assume the existence of some parallel paraphrase corpora while unsupervised paraphrase generation has been little explored. 5 Experiments We conduct our experiments on two datasets: sentence-level Penn Treebank (Marcus et al., 1993) with human-constituted parse trees and a 90 million word subset of Wikipedia (Gulordava et al., 2018) with parsed trees. When the decoder is too strong, VAE suffers from posterior collapse where the model learns to ignore the latent variable (Bowman et al., 2016). To avoid posterior collapse, KLterm annealing and dropping out words during decoding are employed for training in this work. We also tried an advanced method replacing Gaussian priors with von Mises-Fisher priors (Xu and Durrett, 2018) to prevent KL collapse, but the results 2074 Model PTB wiki90M Standard Inputless Standard Inputless PPL NLL KL PPL NLL KL PPL NLL KL PPL NLL KL KN5 145 132 593 169 141 141 588 182 LSTM-LM 110 124 520 165 105 133 521 179 VAE 112 125 2 317 153 13 106 133 5 308 164 22 SIVAE-c 98(1.6) 121(53) 5(0.5) 286(2.4) 150(99) 17(1.3) 94(1.6) 130(56) 12(1.0) 278(2.3) 161(99) 29(2.4) SIVAE-i 90(1.7) 119(60) 9(1.0) 261(2.6) 147(108) 24(2.5) 89(1.7) 128(63) 16(1.9) 256(2.4) 158(104) 36(5.1) Table 2: Language modeling results on testing sets of PTB and wiki90M. For two SIVAE models, the syntactic tree sequence reconstruction scores are shown in parenthesis alongside the sentence reconstruction scores. Lower is better for PPL and NLL. The best results are in bold. are about the same. 
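The KL-term annealing and word dropout mentioned above can be sketched as follows; the linear form of the annealing schedule is an assumption, since the paper specifies only the starting weight of 0, the increase rate of 0.5 with respect to the total number of batches, the 0.8 threshold, and the 0.4 word-dropout rate (all reported in the Settings paragraph below).

```python
import random

def kl_weight(step: int, total_batches: int, threshold: float = 0.8, rate: float = 0.5) -> float:
    """Linear KL-annealing sketch: the weight starts at 0, reaches `threshold`
    after `rate` * total_batches steps, and is then held constant."""
    ramp_steps = max(1, int(rate * total_batches))
    return min(threshold, threshold * step / ramp_steps)

def word_dropout(tokens, unk_token="<unk>", p: float = 0.4, keep=("<s>", "</s>")):
    """Randomly replace decoder-input words with <unk> so the decoder cannot rely
    purely on the ground-truth history, which mitigates posterior collapse."""
    return [t if (t in keep or random.random() > p) else unk_token for t in tokens]
```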
To discover whether the incorporation of syntactic trees is helpful for sentence generation, we compare our two versions of SIVAE with three baselines that do not utilize syntactic information: a 5gram Kneser-Ney language model (KN5) (Heafield et al., 2013), an LSTM language model (LSTMLM) (Sundermeyer et al., 2012), and a standard VAE (Bowman et al., 2016) using an LSTM-based encoder and decoder. Experimental results of language modeling are evaluated by the reconstruction loss using perplexity and the targeted syntactic evaluation proposed in (Marvin and Linzen, 2018). In section 5.3, we show the unsupervised paraphrase generation results. Datasets We use two datasets in this paper. For sentence-level Penn Treebank (PTB), the syntactic trees are labeled by humans (i.e. “gold-standard” trees). For Wikipedia-90M (wiki90M), which does not contain human-generated trees, we first feed the sentences into a state-of-the-art constituency parser (Kitaev and Klein, 2018), and then use the parsed trees as syntactic information for our model. Further, we replace (low-frequency) words that appear only once in both datasets with the <unk> token. Statistics about the two datasets are shown in Table 1. As we can see, the linearized sequences are much longer than sentences. The vocabulary of trees sequences is much smaller than the vocabulary of sentences; and golden trees have larger vocabulary than parsed trees. Settings The parameters are fine-tuned on the validation set. Our implementation of SIVAE uses one-layer bi-directional LSTM architectures for both encoders, and one-layer unidirectional LSTM architectures for both decoders. The size of hidden units in the LSTM is 600 and the size of word embeddings is 300. The latent variable size is set to 150 for both sentences and their syntactic trees. The hidden units size of the MLP in the conditional prior network is 400. We also tried to use different model sizes for sentences and syntactic trees but the results are about the same and the performance even get worse when the difference of the model sizes is too big. We use SGD for optimization, with a learning rate of 0.0005. The batch size is 32 and the number of epochs is 10. The word dropout rate during decoding is 0.4. For KL annealing, the initial weights of the KL terms are 0, and then we gradually increase the weights as training progresses, until they reach the KL threshold of 0.8; the rate of this increase is set to 0.5 with respect to the total number of batches. 5.1 Language Modeling Results We explore two settings for the decoders: standard and inputless. In the standard setting, the input to the LSTM decoder is the concatenation of the latent representation z and the previous ground truth word. A powerful decoder usually results in good reconstruction in this setting but the model may ignore the latent variable. In the inputless setting, the decoder purely relies on the latent representations without any use of prior words, so that the model is driven to learn high-quality latent representations of the sentences and syntactic trees. The language-modeling results, on testing sets evaluated by negative log likelihood (NLL) and perplexity (PPL), are shown in Table 2. SIVAEs outperform all other baselines on both datasets, demonstrating the explicit incorporation of syntactic trees helps with the reconstruction of sentences. 
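(For reference, the perplexity reported in Table 2 is the exponentiated per-token negative log likelihood; assuming the NLL is summed over tokens and measured in nats, it can be computed as below.)

```python
import math

def perplexity(total_nll: float, num_tokens: int) -> float:
    """Per-token perplexity from a summed negative log likelihood (natural log assumed)."""
    return math.exp(total_nll / num_tokens)
```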
The performance boost on the wiki90M dataset also shows that syntactic trees parsed by a welldeveloped parser can serve the same function as human-constituted trees, for our model to utilize syntactic information; this underscores how mature parser technology may be leveraged in text generation. Between the two proposed methods, SIVAE-i is better at reconstructing sentences while SIVAE-c is better at reconstructing syntactic trees. In the 2075 standard setting, VAE performs almost the same as the LSTM language model, possibly because the strong LSTM decoder plays a dominant role when it uses prior words, so the VAE becomes similar to an LSTM language model. Furthermore, the KL divergence of the proposed models indicate that SIVAE is better at avoiding posterior collapse, so the LSTM sentence decoder can take advantage of the encoded latent variable as well as the previously generated syntactic tree. In the inputless setting, we see that VAE contains a significantly larger KL term and shows substantial improvement over KN5 and LSTM language models. SIVAEs further reduces PPL from 317 to 261 on PTB and from 308 to 256 on wiki90M, compared to VAE. 5.2 Targeted Syntactic Evaluation We adopt targeted syntactic evaluation (Marvin and Linzen, 2018) to examine whether the proposed methods improve the grammar of generated sentences. The idea is to assign a higher probability for generating the grammatical sentence than the ungrammatical one, given a pair of sentences that only differ in grammar. There are three types of sentence pairs used in this work. Subject-verb agreement (SVA): Third-person present English verbs need to agree with the number of their subjects. For example, simple SVA: (a). The author laughs. (b). *The author laugh. Reflexive anaphoras (RA): A reflective pronoun such as himself needs to agree in number (and gender) with its antecedent. For example, simple RA: (a). The senators embarrassed themselves. (b). *The senators embarrassed herself. Negative polarity items (NPI): Words like any and ever that can only be used in the scope of negation are negative polarity items. For example, simple NPI: (a). No students have ever lived here. (b). *Most students have ever lived here. In the above examples, we expect the probability of generating (a) to be higher than the probability of generating (b). However, it is trivial to identify these simple test pairs with simple syntax. Thus we include complex longer test pairs with greater Model SVA RA NPI S C S C S C Humans 0.96 0.85 0.96 0.87 0.98 0.81 KN5 0.79 0.50 0.50 0.50 0.50 0.50 LSTM-LM 0.94 0.56 0.83 0.55 0.50 0.50 VAE 0.94 0.57 0.84 0.57 0.51 0.50 SIVAE-c 0.97 0.75 0.89 0.64 0.57 0.52 SIVAE-i 0.95 0.71 0.88 0.63 0.56 0.52 Table 3: Accuracy of targeted syntactic evaluation for each grammar test case. S and C denote simple testing pairs and complex testing pairs. The total number of test sentences is 44800. Models are trained on wiki90M. The best results are in bold. depth in relative clauses, identifying which requires more understanding of the syntactic structure. The accuracy per grammar test case of each method is shown in Table 3. Human scores on these test pairs in (Marvin and Linzen, 2018) are also shown for reference. SIVAE outperforms other baselines on grammar testing cases, demonstrating the explicit incorporation of syntactic trees helps with the grammar of generated sentences. For simple SVA testing pairs, SIVAE-c has a better score than humans. 
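This evaluation reduces to checking whether the model assigns a higher probability to the grammatical member of each minimal pair. A scoring loop might look like the following, where `sentence_logprob` is a stand-in for scoring a sentence under the trained model (how the latent variables are handled when scoring is not spelled out here, so that part is left abstract):

```python
def targeted_syntactic_accuracy(pairs, sentence_logprob):
    """pairs: iterable of (grammatical, ungrammatical) sentences, e.g.
    ("The author laughs .", "The author laugh .").
    sentence_logprob: callable returning log p(sentence) under the model."""
    correct, total = 0, 0
    for good, bad in pairs:
        correct += int(sentence_logprob(good) > sentence_logprob(bad))
        total += 1
    return correct / max(total, 1)
```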
Even for a difficult grammar test like NPI, our methods still makes significant progress compared to other baselines, whose scores show no syntactic understanding of these sentences. From Table 3, note that KN5 can only identify simple SVA pairs. In addition, VAE has similar syntactic performance as a LSTM language model, which verifies the results in reconstruction. Between the two proposed methods, SIVAE-i makes more grammar mistakes than SIVAE-c, although it has better perplexity in Table 2. This is because SIVAE-c considers the dependency between the sentence prior and the syntactic tree prior, so it can more efficiently prevent the mismatch between two latent variables. In other words, SIVAE-c learns more robust syntactic representations, but this advantage is not reflected on the reconstruction evaluation. 5.3 Unsupervised Paraphrasing Results The proposed method is used for generating paraphrases by implicitly selecting (SIVAE-c) or explicitly changing (SIVAE-i) the syntactic tree templates. Our model is not trained on a paraphrase corpora, which makes it a purely unsupervised paraphrasing network. Syntactically Controlled Paraphrasing SIVAE-i as the syntactically controlled para2076 Template Paraphrase original the discovery of dinosaurs has long been accompanied by a legend . ( SBARQ ( NP ) ( VP ) ( , ) ( SQ ) ( ? ) ) the discovery of dinosaurs has been a legend , is it ? ( S ( “ ) ( NP ) ( VP ) ( ” ) ( NP ) ( VP ) ( . ) ) “ the discovery of dinosaurs is a legend ” he said . ( S ( VP ) ( , ) ( NP ) ( . ) ) having been accompanied , the unk lengend . original in 1987 a clock tower and a fountain were erected at council unk monument . ( S ( PP ) ( PP ) ( NP ) ( VP ) ( . ) ) in 1987 at council a fountain was erected . ( S ( VP ) ( NP ) ( CC ) ( NP ) ( PP ) ( . ) ) build a clock and a fountain at council unk unk . ( S ( NP ) ( ; ) ( S ) ( PP ) ( . ) ) a clock p ; he shops everything on the fountain at unk unk . Table 4: Examples of syntactically controlled paraphrases generated by SIVAE-i. We show two successful and one failed (in blue) generations with different templates for each input sentence. Ori the new york times has been one of the best selling newspapers in america . Gen1 the new york times also has been used as american best selling newspaper . Gen2 the new york times also has been used as a “ unk ” that sells in america . Gen3 the new york times also has been used as the best “ unk ” selling in america . Table 5: An example of paraphrases generated by SIVAE-c. phrasing network is trained on sentences and their simplified syntactic sequences of PTB and wiki90M dataset. Table 4 shows some example paraphrases generated by SIVAE-i using different syntactic templates. We see that SIVAE-i has the ability to syntactically control the generated sentences that conform to the target syntactic template. The examples are well-formed, semantically sensible, and grammatically correct sentences that also preserve semantics of the original sentences. However, the model can generate nonsensical outputs, like the failed cases in Table 4, when the target template mismatches the input sentence. Paraphrasing with Different Tempera We further perform paraphrasing using SIVAE-c with different tempera. Table 5 shows example paraphrases generated by SIVAE-i. We see that SIVAE-c can generate grammatical sentences that are relevant to the original sentence. However, the generated paraphrases are very similar, indicating that the variance of the conditional prior network is small. 
In other words, given a sentence latent representation, the range for SIVAE-c selecting a possible syntactic tree representation is small, so it tends to generate similar paraphrases. Qualitative Human Evaluation We adopt similar human evaluation metrics as in (Gupta et al., Model PTB wiki90M Rele Read Div Rele Read Div VAE 2.63 3.07 2.77 3.03 3.20 2.60 SIVAE-c 2.93 3.47 2.80 3.27 3.67 2.73 SIVAE-i 3.00 3.30 3.13 3.37 3.53 3.20 Table 6: Human evaluation results on Relevance, Readability, and Diversity of generated paraphrases. 2018) for generated paraphrases. For 20 original sentences, we collect 5 paraphrases for each sentence (100 in total) generated by SIVAE-c or SIVAE-i using 5 different syntactic templates. The models are trained on PTB and wiki90M. Three aspects are verified in human evaluation: Relevance with the original sentence, Readability w.r.t. the syntax of generated sentences, and Diversity of different generations for the same original sentence. Three human evaluators assign a score on a scale of 1-5 (higher is better) for each aspect per generation. The human evaluation results for unsupervised paraphrase generation using standard VAE, SIVAEi and SIVAE-c are shown in Table 6. SIVAE-c has the best scores and standard VAE has the worst scores at the readability of generated sentences, which further verifies that syntactic information is helpful for sentence generation. Paraphrases generated by SIVAE-i are more diverse under different syntactic templates, compared to SIVAE-c and standard VAE. All three models show better paraphrasing performance on the wiki90M dataset. 5.4 Continuity of Latent Spaces We further test the continuity of latent spaces in our model. Two vectors zA and zB are randomly sampled from the sentence latent space of SIVAEc. Table 7 shows generated sentences based on intermediate points between zA and zB. We see the transitions are smooth and the generations are grammatical, verifying the continuity of the sen2077 A in january 2014 , the unk announced that one player would be one of the first two heroes . • in january 2014 , he was one of the first two players to be the most successful . • until the end of the first half of the series , he has played the most reported time . • until the end of world war i , he was the first player in the united states . • there are also a number of other members in the american war association . B there are also a number of other american advances , such as the unk unk of the american association . Table 7: Intermediate sentences are generated between two random points in the latent space of SIVAE-c. tence latent space. The syntactic structure remains consistent in neighborhoods along the path, indicating the continuity in the syntactic tree latent space. 6 Conclusion We present SIVAE, a novel syntax-infused variation autoencoder architecture for text generation, leveraging constituency parse tree structure as the linguistic prior to generate more fluent and grammatical sentences. The new lower bound objective accommodates two latent spaces, for jointly encoding and decoding sentences and their syntactic trees. The first version of SIVAE exploits the dependencies between two latent spaces, while the second version enables syntactically controlled sentence generation by assuming the two priors are independent. Experimental results demonstrate the incorporation of syntactic trees is helpful for reconstruction and grammar of generated sentences. 
In addition, SIVAE can perform unsupervised paraphrasing with different syntactic tree templates. Acknowledgments This research was supported in part by DARPA, DOE, NIH, ONR and NSF. We thank Kazi Shefaet Rahman, Ozan Irsoy, Igor Malioutov and other people in the Bloomberg NLP platform team for their feedback on the initial idea of the work. We thank the ACL reviewers for their helpful feedback. This work also benefitted from discussions with Tao Lei and Lili Yu. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR). Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP). Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of the Conference on Natural Language Learning (CoNLL). Huadong Chen, Shujian Huang, David Chiang, and Jiajun Chen. 2017. Improved neural machine translation with a syntax-aware encoder and decoder. Proceedings of the Association for Computational Linguistics (ACL). Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. Proceedings of Empirical Methods in Natural Language Processing (EMNLP). Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. Neural Information Processing Systems (NIPS) Workshop on Deep Learning. Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-sequence attentional neural machine translation. In Proceedings of the Association for Computational Linguistics (ACL). Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to parse and translate improves neural machine translation. In Proceedings of the Association for Computational Linguistics (ACL). Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Neural Information Processing Systems (NIPS). Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for paraphrase generation. In Proceedings of the National Conference on Artificial Intelligence (AAAI). Sadid A Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, Oladimeji Farri, et al. 2016. Neural paraphrase generation with stacked residual lstm networks. In Proceedings the International Conference on Computational Linguistics (COLING). 2078 Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H Clark, and Philipp Koehn. 2013. Scalable modified kneserney language model estimation. In Proceedings of the Association for Computational Linguistics (ACL), volume 2. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In Proceedings of the International Conference on Machine Learning (ICML). Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. 
Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). Yoon Kim, Sam Wiseman, Andrew C Miller, David Sontag, and Alexander M Rush. 2018. Semiamortized variational autoencoders. Proceedings of the International Conference on Machine Learning (ICML). Diederik P Kingma and Max Welling. 2014. Autoencoding variational bayes. In Proceedings of the International Conference on Learning Representations (ICLR). Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. Proceedings of the Association for Computational Linguistics (ACL). Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics. Mitchell P Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 19(2). Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP). Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh annual conference of the international speech communication association. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. Proceedings of the International Conference on Machine Learning (ICML). Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP). Rico Sennrich and Barry Haddow. 2016. Linguistic input features improve neural machine translation. In Proceedings of the First Conference on Machine Translation: Volume 1, Research Papers. Harshil Shah and David Barber. 2018. Generative neural machine translation. In Neural Information Processing Systems (NIPS). Dinghan Shen, Asli Celikyilmaz, Yizhe Zhang, Liqun Chen, Xin Wang, Jianfeng Gao, and Lawrence Carin. 2019. Towards generating long and coherent text with multi-level latent variable models. arXiv preprint arXiv:1902.00154. Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron Courville. 2018. Neural language modeling by jointly learning syntax and lexicon. Proceedings of the International Conference on Learning Representations (ICLR). Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural mt learn source syntax? In Proceedings of Empirical Methods for Natural Language Processing (EMNLP). Martin Sundermeyer, Ralf Schlüter, and Hermann Ney. 2012. Lstm neural networks for language modeling. In Thirteenth annual conference of the international speech communication association. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Neural Information Processing Systems (NIPS). Oriol Vinyals and Quoc V. Le. 2015. A neural conversational model. Proceedings of the International Conference on Machine Learning (ICML) Workshop on Deep Learning. Wenlin Wang, Zhe Gan, Hongteng Xu, Ruiyi Zhang, Guoyin Wang, Dinghan Shen, Changyou Chen, and Lawrence Carin. 2019. Topic-guided variational autoencoders for text generation. arXiv preprint arXiv:1903.07137. 
Jiacheng Xu and Greg Durrett. 2018. Spherical latent spaces for stable variational autoencoders. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP). Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. 2017. Improved variational autoencoders for text modeling using dilated convolutions. In Proceedings of the International Conference on Machine Learning (ICML). Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the Association for Computational Linguistics (ACL).
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 12–21 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 12 Incremental Transformer with Deliberation Decoder for Document Grounded Conversations Zekang Li†♦, Cheng Niu‡, Fandong Meng‡∗, Yang Feng♦, Qian Li♠, Jie Zhou‡ †Dian Group, School of Electronic Information and Communications Huazhong University of Science and Technology ‡Pattern Recognition Center, WeChat AI, Tencent Inc, China ♦Key Laboratory of Intelligent Information Processing Institute of Computing Technology, Chinese Academy of Sciences ♠School of Computer Science and Engineering, Northeastern University, China [email protected], {chengniu,fandongmeng,jiezhou}@tencent.com [email protected], [email protected] Abstract Document Grounded Conversations is a task to generate dialogue responses when chatting about the content of a given document. Obviously, document knowledge plays a critical role in Document Grounded Conversations, while existing dialogue models do not exploit this kind of knowledge effectively enough. In this paper, we propose a novel Transformerbased architecture for multi-turn document grounded conversations. In particular, we devise an Incremental Transformer to encode multi-turn utterances along with knowledge in related documents. Motivated by the human cognitive process, we design a two-pass decoder (Deliberation Decoder) to improve context coherence and knowledge correctness. Our empirical study on a real-world Document Grounded Dataset proves that responses generated by our model significantly outperform competitive baselines on both context coherence and knowledge relevance. 1 Introduction Past few years have witnessed the rapid development of dialogue systems. Based on the sequenceto-sequence framework (Sutskever et al., 2014), most models are trained in an end-to-end manner with large corpora of human-to-human dialogues and have obtained impressive success (Shang et al., 2015; Vinyals and Le, 2015; Li et al., 2016; Serban et al., 2016). While there is still a long way for reaching the ultimate goal of dialogue systems, which is to be able to talk like humans. And one of the essential intelligence to achieve this goal is the ability to make use of knowledge. ∗Fandong Meng is the corresponding author of the paper. This work was done when Zekang Li was interning at Pattern Recognition Center, WeChat AI, Tencent. There are several works on dialogue systems exploiting knowledge. The Mem2Seq (Madotto et al., 2018) incorporates structured knowledge into the end-to-end task-oriented dialogue. Liu et al. (2018) introduces factmatching and knowledge-diffusion to generate meaningful, diverse and natural responses using structured knowledge triplets. Ghazvininejad et al. (2018), Parthasarathi and Pineau (2018), Yavuz et al. (2018), Dinan et al. (2018) and Lo and Chen (2019) apply unstructured text facts in open-domain dialogue systems. These works mainly focus on integrating factoid knowledge into dialogue systems, while factoid knowledge requires a lot of work to build up, and is only limited to expressing precise facts. Documents as a knowledge source provide a wide spectrum of knowledge, including but not limited to factoid, event updates, subjective opinion, etc. Recently, intensive research has been applied on using documents as knowledge sources for QuestionAnswering (Chen et al., 2017; Huang et al., 2018; Yu et al., 2018; Rajpurkar et al., 2018; Reddy et al., 2018). 
The Document Grounded Conversation is a task to generate natural dialogue responses when chatting about the content of a specific document. This task requires to integrate document knowledge with the multi-turn dialogue history. Different from previous knowledge grounded dialogue systems, Document Grounded Conversations utilize documents as the knowledge source, and hence are able to employ a wide spectrum of knowledge. And the Document Grounded Conversations is also different from document QA since the contextual consistent conversation response should be generated. To address the Document Grounded Conversation task, it is important to: 1) Exploit document knowledge which are relevant to the 13 conversation; 2) Develop a unified representation combining multi-turn utterances along with the relevant document knowledge. In this paper, we propose a novel and effective Transformer-based (Vaswani et al., 2017) architecture for Document Grounded Conversations, named Incremental Transformer with Deliberation Decoder. The encoder employs a transformer architecture to incrementally encode multi-turn history utterances, and incorporate document knowledge into the the multi-turn context encoding process. The decoder is a two-pass decoder similar to the Deliberation Network in Neural Machine Translation (Xia et al., 2017), which is designed to improve the context coherence and knowledge correctness of the responses. The first-pass decoder focuses on contextual coherence, while the second-pass decoder refines the result of the firstpass decoder by consulting the relevant document knowledge, and hence increases the knowledge relevance and correctness. This is motivated by human cognition process. In real-world human conversations, people usually first make a draft on how to respond the previous utterance, and then consummate the answer or even raise questions by consulting background knowledge. We test the effectiveness of our proposed model on Document Grounded Conversations Dataset (Zhou et al., 2018). Experiment results show that our model is capable of generating responses of more context coherence and knowledge relevance. Sometimes document knowledge is even well used to guide the following conversations. Both automatic and manual evaluations show that our model substantially outperforms the competitive baselines. Our contributions are as follows: • We build a novel Incremental Transformer to incrementally encode multi-turn utterances with document knowledge together. • We are the first to apply a two-pass decoder to generate responses for document grounded conversations. Two decoders focus on context coherence and knowledge correctness respectively. 2 Approach 2.1 Problem Statement Our goal is to incorporate the relevant document knowledge into multi-turn conversations. Utterance k-1 Utterance k-2 Utterance k Document k-2 Incremental Transformer Incremental Transformer Incremental Transformer Second-pass Decoder Self-Attentive Encoder Self-Attentive Encoder Document k-1 Self-Attentive Encoder Document k Self-Attentive Encoder ŏ ŏ ŏ Utterance k+1 First-pass Decoder Document k+1 Self-Attentive Encoder First-pass output Self-Attentive Encoder Deliberation Decoder Incremental Transformer Encoder Figure 1: The framework of Incremental Transformer with Deliberation Decoder for Document Grounded Conversations. Formally, let U = u(1), ..., u(k), ..., u(K) be a whole conversation composed of K utterances. 
We use u(k) = u(k) 1 , ..., u(k) i , ..., u(k) I to denote the k-th utterance containing I words, where u(k) i denotes the i-th word in the k-th utterance. For each utterance u(k), likewise, there is a specified relevant document s(k) = s(k) 1 , ..., s(k) j , ..., s(k) J , which represents the document related to the kth utterance containing J words. We define the document grounded conversations task as generating a response u(k+1) given its related document s(k+1) and previous k utterances U≤k with related documents S≤k, where U≤k = u(1), ..., u(k) and S≤k = s(1), ..., s(k). Note that s(k), s(k+1), ..., s(k+n) may be the same. Therefore, the probability to generate the response u(k+1) is computed as: P(u(k+1)|U≤k, S≤k+1; θ) = QI i=1 P(uk+1 i |U≤k, S≤k+1, u(k+1) <i ; θ) (1) where u(k+1) <i = u(k+1) 1 , ..., u(k+1) i−1 . 2.2 Model Description Figure 1 shows the framework of the proposed Incremental Transformer with Deliberation De14 Utterance Embedding Knowledge Attention Self-Attention Context Attention Feed-Forward Target Embedding Self-Attention Context Attention Utterance Attention Feed-Forward Target Embedding Self-Attention Knowledge Attention First-Pass Attention Feed-Forward Document/ Utterance Embedding Feed-Forward Self-Attention Target Embedding Self-Attention Context Attention Knowledge Attention Feed-Forward (a) (c) (d) (e) Utterance Embedding Knowledge Attention Self-Attention Feed-Forward (b) Softmax Softmax Softmax (1) (2) Figure 2: (1) Detailed architecture of model components. (a) The Self-Attentive Encoder(SA). (b) Incremental Transformer (ITE). (c) Deliberation Decoder (DD). (2) Simplified version of our proposed model used to verify the validity of our proposed Incremental Transformer Encoder and Deliberation Decoder. (d) Knowledge-Attention Transformer(KAT). (e) Context-Knowledge-Attention Decoder (CKAD). coder. Please refer to Figure 2 (1) for more details. It consists of three components: 1) Self-Attentive Encoder (SA) (in orange) is a transformer encoder as described in (Vaswani et al., 2017), which encodes the document knowledge and the current utterance independently. 2) Incremental Transformer Encoder (ITE) (on the top) is a unified transformer encoder which encodes multi-turn utterances with knowledge representation using an incremental encoding scheme. This module takes previous utterances u(i) and the document s(i)’s SA representation as input, and use attention mechanism to incrementally build up the representation of relevant context and document knowledge. 3) Deliberation Decoder (DD) (on the bottom) is a two-pass unified transformer decoder for better generating the next response. The first-pass decoder takes current utterance u(k)’s SA representation and ITE output as input, and mainly relies on conversation context for response generation. The second-pass decoder takes the SA representation of the first pass result and the relevant document s(k+1)’s SA representation as input, and uses document knowledge to further refine the response. Self-Attentive Encoder As document knowledge often includes several sentences, it’s important to capture long-range dependencies and identify relevant information. We use multi-head self-attention (Vaswani et al., 2017) to compute the representation of document knowledge. As shown in Figure 2 (a), we use a selfattentive encoder to compute the representation of the related document knowledge s(k). 
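In PyTorch terms, one layer of such a self-attentive encoder (multi-head self-attention followed by a position-wise feed-forward network, each wrapped with a residual connection and layer normalization, as formalized below) might be sketched as follows; the dimensions and the batch-first layout are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

class SelfAttentiveLayer(nn.Module):
    """One layer of the Self-Attentive Encoder: multi-head self-attention plus a
    position-wise FFN, each with a residual connection and layer normalization."""
    def __init__(self, d_model: int = 512, n_heads: int = 8, d_ff: int = 2048):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):                      # x: [batch, seq_len, d_model]
        a, _ = self.self_attn(x, x, x)         # Q = K = V = x (self-attention)
        x = self.norm1(x + a)                  # residual connection + layer norm
        return self.norm2(x + self.ffn(x))     # position-wise FFN + residual + norm
```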
The input (In(k) s ) of the encoder is a sequence of document words embedding with positional encoding added.(Vaswani et al., 2017): In(k) s = [s(k) 1 , ..., s(k) J ] (2) s(k) j = esj + PE(j) (3) where esj is the word embedding of s(k) j and PE(·) denotes positional encoding function. The Self-Attentive encoder contains a stack of Nx identical layers. Each layer has two sublayers. The first sub-layer is a multi-head selfattention (MultiHead) (Vaswani et al., 2017). MultiHead(Q, K, V) is a multi-head attention function that takes a query matrix Q, a key matrix K, and a value matrix V as input. In current case, Q = K = V. That’s why it’s called self-attention. And the second sub-layer is a simple, position-wise fully connected feed-forward network (FFN). This FFN consists of two linear transformations with a ReLU activation in between. (Vaswani et al., 2017). A(1) = MultiHead(In(k) s , In(k) s , In(k) s ) (4) D(1) = FFN(A(1)) (5) FFN(x) = max(0, xW1 + b1)W2 + b2 (6) 15 where A(1) is the hidden state computed by multihead attention at the first layer, D(1) denotes the representation of s(k) after the first layer. Note that residual connection and layer normalization are used in each sub-layer, which are omitted in the presentation for simplicity. Please refer to (Vaswani et al., 2017) for more details. For each layer, repeat this process: A(n) = MultiHead(D(n−1), D(n−1), D(n−1)) (7) D(n) = FFN(A(n)) (8) where n = 1, ..., Ns and D(0) = In(k) s . We use SAs(·) to denote this whole process: d(k) = D(Nx) = SAs(s(k)) (9) where d(k) is the final representation for the document knowledge s(k). Similarly, for each utterance u(k), we use In(k) u = [u(k) 1 , ..., u(k) I ] to represent the sequence of the position-aware word embedding. Then the same Self-Attentive Encoder is used to compute the representation of current utterance u(k), and we use SAu(u(k)) to denote this encoding result. The Self-Attentive Encoder is also used to encode the document s(k+1) and the first pass decoding results in the second pass of the decoder. Note that SAs and SAu have the same architecture but different parameters. More details about this will be mentioned in the following sections. Incremental Transformer Encoder To encode multi-turn document grounded utterances effectively, we design an Incremental Transformer Encoder. Incremental Transformer uses multi-head attention to incorporate document knowledge and context into the current utterance’s encoding process. This process can be stated recursively as follows: c(k) = ITE(c(k−1), d(k), In(k) u ) (10) where ITE(·) denotes the encoding function, c(k) denotes the context state after encoding utterance u(k), c(k−1) is the context state after encoding last utterance u(k−1), d(k) is the representation of document s(k) and In(k) u is the embedding of current utterance u(k). As shown in Figure 2 (b), we use a stack of Nu identical layers to encode u(k). Each layer consists of four sub-layers. The first sub-layer is a multihead self-attention: B(n) = MultiHead(C(n−1), C(n−1), C(n−1)) (11) where n = 1, ..., Nu, C(n−1) is the output of the last layer and C(0) = In(k) u . The second sub-layer is a multi-head knowledge attention: E(n) = MultiHead(B(n), d(k), d(k)) (12) The third sub-layer is a multi-head context attention: F(n) = MultiHead(E(n), c(k−1), c(k−1)) (13) where c(k−1) is the representation of the previous utterances. That’s why we called the encoder ”Incremental Transformer”. 
Deliberation Decoder

Motivated by the real-world human cognitive process, we design a Deliberation Decoder with two decoding passes to improve knowledge relevance and context coherence. The first-pass decoder takes the representation of the current utterance $\text{SA}_u(u^{(k)})$ and the context $c^{(k)}$ as input and focuses on generating contextually coherent responses. The second-pass decoder takes the representation of the first-pass decoding result and the related document $s^{(k+1)}$ as input, and focuses on increasing knowledge usage and guiding the following conversation within the scope of the given document.

When generating the $i$-th response word $u^{(k+1)}_i$, the already generated words $u^{(k+1)}_{<i}$ are given as input (Vaswani et al., 2017). We use $\text{In}^{(k+1)}_r$ to denote the matrix representation of $u^{(k+1)}_{<i}$:

$$\text{In}^{(k+1)}_r = [u^{(k+1)}_0, u^{(k+1)}_1, \ldots, u^{(k+1)}_{i-1}] \quad (16)$$

where $u^{(k+1)}_0$ is the vector representation of the sentence-start token.

As shown in Figure 2 (c), the Deliberation Decoder consists of a first-pass decoder and a second-pass decoder. The two decoders have the same architecture but different inputs to their sub-layers. Both are composed of a stack of $N_y$ identical layers, and each layer has four sub-layers. For the first-pass decoder, the first sub-layer is multi-head self-attention:

$$G^{(n)}_1 = \text{MultiHead}(R^{(n-1)}_1, R^{(n-1)}_1, R^{(n-1)}_1) \quad (17)$$

where $n = 1, \ldots, N_y$, $R^{(n-1)}_1$ is the output of the previous layer, and $R^{(0)}_1 = \text{In}^{(k+1)}_r$. The second sub-layer is multi-head context attention:

$$H^{(n)}_1 = \text{MultiHead}(G^{(n)}_1, c^{(k)}, c^{(k)}) \quad (18)$$

where $c^{(k)}$ is the representation of the context $u^{\leq k}$. The third sub-layer is multi-head utterance attention:

$$M^{(n)}_1 = \text{MultiHead}(H^{(n)}_1, \text{SA}_u(u^{(k)}), \text{SA}_u(u^{(k)})) \quad (19)$$

where $\text{SA}_u(\cdot)$ is a Self-Attentive Encoder which encodes the latest utterance $u^{(k)}$. Eq. (18) mainly encodes the context and document knowledge relevant to the latest utterance, while Eq. (19) encodes the latest utterance directly; we expect the best performance from combining both. The fourth sub-layer is a position-wise fully connected feed-forward network:

$$R^{(n)}_1 = \text{FFN}(M^{(n)}_1) \quad (20)$$

After $N_y$ layers, a softmax gives the word probabilities decoded by the first-pass decoder:

$$P(\hat{u}^{(k+1)}_{(1)}) = \text{softmax}(R^{(N_y)}_1) \quad (21)$$

where $\hat{u}^{(k+1)}_{(1)}$ is the response decoded by the first-pass decoder. For the second-pass decoder:

$$G^{(n)}_2 = \text{MultiHead}(R^{(n-1)}_2, R^{(n-1)}_2, R^{(n-1)}_2) \quad (22)$$
$$H^{(n)}_2 = \text{MultiHead}(G^{(n)}_2, d^{(k+1)}, d^{(k+1)}) \quad (23)$$
$$M^{(n)}_2 = \text{MultiHead}(H^{(n)}_2, \text{SA}_u(\hat{u}^{(k+1)}_{(1)}), \text{SA}_u(\hat{u}^{(k+1)}_{(1)})) \quad (24)$$
$$R^{(n)}_2 = \text{FFN}(M^{(n)}_2) \quad (25)$$
$$P(\hat{u}^{(k+1)}_{(2)}) = \text{softmax}(R^{(N_y)}_2) \quad (26)$$

where $R^{(n-1)}_2$ is the second-pass counterpart of $R^{(n-1)}_1$, i.e., the output of the previous layer, $d^{(k+1)}$ is the representation of document $s^{(k+1)}$ computed by the Self-Attentive Encoder, and $\hat{u}^{(k+1)}_{(2)}$ is the output of the second-pass decoder.

Training

In contrast to the original Deliberation Network (Xia et al., 2017), which proposes a complex joint learning framework using a Monte Carlo method, we minimize the following loss, as Xiong et al. (2018) do:

$$\mathcal{L}_{mle} = \mathcal{L}_{mle1} + \mathcal{L}_{mle2} \quad (27)$$
$$\mathcal{L}_{mle1} = -\sum_{k=1}^{K} \sum_{i=1}^{I} \log P(\hat{u}^{(k+1)}_{(1)i}) \quad (28)$$
$$\mathcal{L}_{mle2} = -\sum_{k=1}^{K} \sum_{i=1}^{I} \log P(\hat{u}^{(k+1)}_{(2)i}) \quad (29)$$
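For illustration, the sketch below shows one second-pass decoder layer (Eqs. 22-25) and the joint MLE objective (Eqs. 27-29) in PyTorch. It is a hedged reconstruction, not the released code: the names are ours, the residual connections, layer normalization, and the causal mask on the target-side self-attention are added explicitly since the equations leave them implicit, and the sizes again follow Section 3.3.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SecondPassDecoderLayer(nn.Module):
    """One second-pass layer: masked self-attention over the partial target,
    knowledge attention over d(k+1), attention over the encoded first-pass
    draft SA_u(u_hat_(1)), then the position-wise FFN (Eqs. 22-25)."""
    def __init__(self, d_model=512, d_ff=2048, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.know_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.draft_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(4))

    def forward(self, r_prev, d_next, draft_enc):
        L = r_prev.size(1)
        # Causal mask: position i may only attend to the generated prefix u(k+1)_{<i}.
        causal = torch.triu(torch.full((L, L), float("-inf"), device=r_prev.device), diagonal=1)
        g, _ = self.self_attn(r_prev, r_prev, r_prev, attn_mask=causal)   # Eq. (22)
        g = self.norm[0](r_prev + g)
        h, _ = self.know_attn(g, d_next, d_next)                          # Eq. (23)
        h = self.norm[1](g + h)
        m, _ = self.draft_attn(h, draft_enc, draft_enc)                   # Eq. (24)
        m = self.norm[2](h + m)
        return self.norm[3](m + self.ffn(m))                              # Eq. (25)

def deliberation_loss(logits_pass1, logits_pass2, gold_ids, pad_id=0):
    """Eqs. (27)-(29): both passes are trained against the same gold response
    and their token-level negative log-likelihoods are summed."""
    def nll(logits):  # logits: (batch, len, vocab); gold_ids: (batch, len)
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               gold_ids.reshape(-1), ignore_index=pad_id, reduction="sum")
    return nll(logits_pass1) + nll(logits_pass2)
```

A first-pass layer has the same shape, with $d^{(k+1)}$ and the draft encoding replaced by the context $c^{(k)}$ and $\text{SA}_u(u^{(k)})$, following Eqs. (17)-(20).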
3 Experiments

3.1 Dataset

We evaluate our model on the Document Grounded Conversations Dataset (Zhou et al., 2018). It contains 72,922 utterances for training, 3,626 utterances for validation, and 11,577 utterances for testing. The utterances can be either casual chat or document grounded. Note that we treat consecutive utterances by the same speaker as one utterance; for example, "A: Hello! B: Hi! B: How's it going?" is treated as "A: Hello! B: Hi! How's it going?". A related document is given for every few consecutive utterances, and it may contain the movie name, cast, introduction, ratings, and some scenes. The average document length is about 200 words. Please refer to Zhou et al. (2018) for more details.

3.2 Baselines

We compare our proposed model with the following state-of-the-art baselines.

Models not using document knowledge:

Seq2Seq: A simple encoder-decoder model (Shang et al., 2015; Vinyals and Le, 2015) with global attention (Luong et al., 2015). We concatenate the context utterances into one long sentence as input.

HRED: A hierarchical encoder-decoder model (Serban et al., 2016), composed of a word-level LSTM for each sentence and a sentence-level LSTM connecting the utterances.

Transformer: The state-of-the-art NMT model based on multi-head attention (Vaswani et al., 2017). We concatenate the context utterances into one long sentence as its input.

Models using document knowledge:

Seq2Seq (+knowledge) and HRED (+knowledge) are based on Seq2Seq and HRED, respectively. Both concatenate the document knowledge representation and the last decoding output embedding as input when decoding. Please refer to Zhou et al. (2018) for more details.

Wizard Transformer: A Transformer-based model for multi-turn open-domain dialogue with unstructured text facts (Dinan et al., 2018). It concatenates the context utterances and text facts into one long sequence as input. We replace the text facts with the document knowledge.

We also conduct an ablation study to illustrate the validity of our proposed Incremental Transformer Encoder and Deliberation Decoder.

ITE+CKAD: Uses the Incremental Transformer Encoder (ITE) as encoder and the Context-Knowledge-Attention Decoder (CKAD) shown in Figure 2 (e). This setup tests the validity of the Deliberation Decoder.

Knowledge-Attention Transformer (KAT): As shown in Figure 2 (d), the encoder of this model is a simplified Incremental Transformer Encoder (ITE) without the context-attention sub-layer; we concatenate the context utterances into one long sentence as its input. The decoder is a simplified Context-Knowledge-Attention Decoder (CKAD), also without the context-attention sub-layer. This setup tests how effectively the context is exploited by the full model.

Model                          | PPL   | BLEU(%) | Fluency | Knowledge Relevance | Context Coherence
Seq2Seq without knowledge      | 80.93 | 0.38    | 1.62    | 0.18                | 0.54
HRED without knowledge         | 80.84 | 0.43    | 1.25    | 0.18                | 0.30
Transformer without knowledge  | 87.32 | 0.36    | 1.60    | 0.29                | 0.67
Seq2Seq (+knowledge)           | 78.47 | 0.39    | 1.50    | 0.22                | 0.61
HRED (+knowledge)              | 79.12 | 0.77    | 1.56    | 0.35                | 0.47
Wizard Transformer             | 70.30 | 0.66    | 1.62    | 0.47                | 0.56
ITE+DD (ours)                  | 15.11 | 0.95    | 1.67    | 0.56                | 0.90
ITE+CKAD (ours)                | 64.97 | 0.86    | 1.68    | 0.50                | 0.82
KAT (ours)                     | 65.36 | 0.58    | 1.58    | 0.33                | 0.78

Table 1: Automatic evaluation and manual evaluation results for baselines and our proposed models.

Model     | Knowledge Relevance (%) | Context Coherence (%)
Wizard    | 64/25/11                | 58/28/14
ITE+CKAD  | 67/16/17                | 40/37/23
ITE+DD    | 64/16/20                | 38/34/28

Table 2: The percentage (%) of each score (0/1/2) on Knowledge Relevance and Context Coherence for Wizard Transformer, ITE+CKAD, and ITE+DD.
3.3 Experiment Setup

We use OpenNMT-py (Klein et al., 2017) as the code framework (https://github.com/OpenNMT/OpenNMT-py); our code and models are available at https://github.com/lizekang/ITDD. For all models, the hidden size is set to 512. For the RNN-based models (Seq2Seq, HRED), a 3-layer bidirectional LSTM (Hochreiter and Schmidhuber, 1997) encoder and a 1-layer LSTM decoder are used. For the Transformer-based models, both the encoder and the decoder have 3 layers, the number of attention heads in multi-head attention is 8, and the filter size is 2048. The word embeddings are shared by utterances, knowledge, and generated responses, and their dimension is set to 512 empirically. We use Adam (Kingma and Ba, 2014) for optimization. When decoding, the beam size is set to 5. We use the previous three utterances and their related documents as input.

3.4 Evaluation Metrics

Automatic Evaluation: We adopt perplexity (PPL) and BLEU (Papineni et al., 2002) to automatically evaluate response generation. Models are evaluated using the perplexity of the gold response, as described in Dinan et al. (2018); lower perplexity indicates better performance. BLEU measures n-gram overlap between a generated response and the gold response. However, since there is only one reference for each response and multiple responses may be feasible, BLEU scores are extremely low. We compute BLEU with the multi-bleu.perl script (https://github.com/google/seq2seq/blob/master/bin/tools/multi-bleu.perl).
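As a small illustration of the perplexity metric (a sketch of the definition, not the evaluation script actually used), gold-response perplexity is the exponential of the average per-token negative log-likelihood accumulated over the test set:

```python
import math

def corpus_perplexity(token_nlls):
    """token_nlls: per-token negative log-likelihoods -log P(u_i | context, document,
    gold prefix) in nats, collected over all gold responses; lower PPL is better."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Example: an average of about 2.72 nats per token corresponds to a PPL around 15.
print(round(corpus_perplexity([2.72] * 1000), 1))   # 15.2
```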
Manual Evaluation: Manual evaluation is essential for dialogue generation. We randomly sampled 30 conversations containing 606 utterances from the test set and obtained 5,454 generated utterances from the nine models. Annotators scored these utterances given the previous utterances and the related documents. We define three metrics for manual evaluation: fluency, knowledge relevance (Liu et al., 2018), and context coherence. All metrics are scored 0/1/2.

fluency: Whether the response is natural and fluent. Score 0 represents not fluent and incomprehensible; 1 represents partially fluent but still comprehensible; 2 represents totally fluent.

knowledge relevance: Whether the response uses relevant and correct knowledge. Score 0 represents no relevant knowledge; 1 represents containing relevant but incorrect knowledge; 2 represents containing relevant and correct knowledge.

context coherence: Whether the response is coherent with the context and guides the following utterances. Score 0 represents not coherent or leading the dialogue to an end; 1 represents coherent with the utterance history but not guiding the following utterances; 2 represents coherent with the utterance history and guiding the next utterance.

Document: moviename: despicable me ... cast: steve carell as gru ... rotten tomatoes: 81% ... it is the debut film of illumination entertainment. ... gru, a supervillain, is disheartened when an unknown supervillain steals the great pyramid of giza. gru, with the assistance of his colleague dr. Nefario and his minions, resolves to one-up this mystery rival by shrinking and stealing the moon. as it would be costly to steal the moon, gru seeks a loan from the bank of evil. ... upon learning that an up-and-coming villain named vector was responsible for the pyramid theft ...

Utterance 1: what do you think about despicable me?
  ITE+DD: i think it is an awesome movie, what about you?
  Wizard: i think it's a great movie!
Utterance 2: i rather liked it. it wasn't what i was expecting but it was still good.
  ITE+DD: yeah, it makes me laugh, even when they make a movie about it.
  Wizard: it's a great movie for kids and adults.
Utterance 3: i like it too. funny and heart warming at the same time.
  ITE+DD: i think it's interesting that gru decided to take a loan from the bank of evil.
  Wizard: i think they should have made a sequel.
Utterance 4: gru was a really interesting character, too. i liked steve carell and the voice he used for gru.
  ITE+DD: yeah, now that you mention it, what did you think of the minions?
  Wizard: i think the entire cast was great.
Utterance 5: yes, he did an excellent job portraying the character. 81% in ratings i think is great.
  ITE+DD: yes, 81% is a great score!
  Wizard: i also liked the villain of the movie.

Table 3: Responses generated by the Incremental Transformer with Deliberation Decoder (ITE+DD) and the Wizard Transformer (Wizard). These utterances are from continuous dialogues; ITE+DD and Wizard generate responses using the context utterances and a given document. Note that Utterance i+1 is the gold response of Utterance i.

3.5 Experimental Results

Table 1 shows the automatic and manual evaluation results for both the baselines and our models. In manual evaluation, among the baselines, Wizard Transformer and Seq2Seq without knowledge have the highest fluency of 1.62, Wizard obtains the highest knowledge relevance of 0.47, and Transformer without knowledge has the highest context coherence of 0.67. Across all models, ITE+CKAD obtains the highest fluency of 1.68, and ITE+DD has the highest knowledge relevance of 0.56 and the highest context coherence of 0.90. In automatic evaluation, our proposed model has lower perplexity and higher BLEU scores than the baselines. For BLEU, HRED (+knowledge) obtains the highest score of 0.77 among the baselines, while ITE+DD reaches 0.95, the highest among all models. For perplexity, Wizard Transformer obtains the lowest value of 70.30 among the baseline models, and ITE+DD has a remarkably lower perplexity of 15.11 than all the other models. A detailed analysis is given in Section 3.6.

3.6 Analysis and Discussion

To our surprise, ITE+DD reaches an extremely low gold-response perplexity. We find that the perplexity after the first-pass decoding is only similar to that of ITE+CKAD, which shows that the second-pass decoder utilizes the document knowledge well and dramatically reduces the gold-response perplexity.

ID | Utterance
1  | I think rachel mcadams had an even better role as regina george however! would you agree?
     First pass:  i'm not a fan of kristen bell, but i think she did a great job.
     Second pass: i'm not a huge fan of rachel mcadams, but he did a great job.
2  | yeah, I guess that's always worth it, and a truce was made as well.
     First pass:  yeah, not only does she reconcile with the plastics.
     Second pass: yeah, she reconciles with janis, damien and aaron.
3  | i liked the scene where buzz thinks he's a big shot hero but then the camera reveals him to be a tiny toy.
     First pass:  i think that's one of the best scenes in the movie.
     Second pass: oh, i think that is what makes the movie unique as well. have you seen any of the other pixar movies?

Table 4: Examples of two-pass decoding. For each case, the first-pass response is shown above the second-pass response.

As shown in Table 2, ITE+DD has a higher percentage of score 2 on both Knowledge Relevance and Context Coherence than ITE+CKAD. This result also demonstrates that the Deliberation Decoder can improve knowledge correctness and better guide the following conversation.
Although the perplexity of ITE+CKAD is only slightly better than that of KAT, the BLEU score, Fluency, Knowledge Relevance, and Context Coherence of ITE+CKAD all significantly outperform those of KAT, which indicates that the Incremental Transformer handles multi-turn document grounded conversations better.

Wizard Transformer performs well on Knowledge Relevance, second only to our Incremental Transformer. However, its score on Context Coherence is lower than that of some other baselines. As shown in Table 2, Wizard Transformer receives a Knowledge Relevance score of 1 more than twice as often as a score of 2, which indicates that the model tends to generate responses with related but incorrect knowledge. Its poor performance on Context Coherence also shows that Wizard Transformer does not respond well to the previous utterance. This reveals the limitation of representing context and document knowledge by simple concatenation.

3.7 Case Study

In this section, we list some examples to show the effectiveness of our proposed model.

Table 3 lists responses generated by our proposed Incremental Transformer with Deliberation Decoder (ITE+DD) and by the Wizard Transformer (which achieves the best overall performance among the baseline models). Our proposed model generates better responses than the Wizard Transformer in terms of knowledge relevance and context coherence.

To demonstrate the effectiveness of the two-pass decoder, we compare the results of the first-pass and second-pass decoding. Table 4 shows the improvement after the second pass. In Case 1, the second-pass result revises the knowledge error in the first-pass result. In Case 2, the second pass uses more detailed knowledge than the first. In Case 3, the second pass can not only respond to the previous utterance but also guide the following conversation by asking knowledge-related questions.

4 Related Work

The closest work to ours lies in the area of open-domain dialogue systems incorporating unstructured knowledge. Ghazvininejad et al. (2018) use an extended encoder-decoder where the decoder is provided with an encoding of both the context and the external knowledge. Parthasarathi and Pineau (2018) use an architecture containing a Bag-of-Words Memory Network fact encoder and an RNN decoder. Dinan et al. (2018) combine Memory Network architectures to retrieve, read, and condition on knowledge, and Transformer architectures to provide text representation and generate outputs. Different from these works, we greatly enhance the Transformer architecture to handle document knowledge in multi-turn dialogue in two respects: 1) using an attention mechanism to combine document knowledge and context utterances; and 2) exploiting an incremental encoding scheme to encode multi-turn knowledge-aware conversations.

Our work is also inspired by several works in other areas. Zhang et al. (2018) introduce document context into the Transformer for document-level Neural Machine Translation (NMT). Guan et al. (2018) devise an RNN-based incremental encoding scheme for story ending generation. In our work, we design an Incremental Transformer to obtain a knowledge-aware context representation using an incremental encoding scheme. Xia et al. (2017) first propose the Deliberation Network, based on RNNs, for NMT.
Our Deliberation Decoder differs in two respects: 1) we devise the two decoders to target context and knowledge, respectively; 2) our second-pass decoder directly fine-tunes the first-pass result, while theirs uses both the hidden states and the results from the first pass.

5 Conclusion and Future Work

In this paper, we propose an Incremental Transformer with Deliberation Decoder for the task of Document Grounded Conversations. Through an incremental encoding scheme, the model achieves a knowledge-aware and context-aware conversation representation. By imitating the real-world human cognitive process, we propose a Deliberation Decoder to optimize knowledge relevance and context coherence. Empirical results show that the proposed model generates responses with much better relevance, correctness, and coherence than the state-of-the-art baselines. In the future, we plan to apply reinforcement learning to further improve performance.

6 Acknowledgments

This work is supported by the 2018 Tencent Rhino-Bird Elite Training Program, the National Natural Science Foundation of China (No. 61662077, No. 61876174), and the National Key R&D Program of China (No. YS2017YFGH001428). We sincerely thank the anonymous reviewers for their thorough reviewing and valuable suggestions.

References

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879.

Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of Wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241.

Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Thirty-Second AAAI Conference on Artificial Intelligence.

Jian Guan, Yansen Wang, and Minlie Huang. 2018. Story ending generation with incremental encoding and commonsense knowledge. arXiv preprint arXiv:1808.10113.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.

Hsin-Yuan Huang, Eunsol Choi, and Wen-tau Yih. 2018. FlowQA: Grasping flow in history for conversational machine comprehension. arXiv preprint arXiv:1810.06683.

Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119.

Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, and Dawei Yin. 2018. Knowledge diffusion for neural dialogue generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489–1498.

Hao-Tong Ye, Kai-Ling Lo, Shang-Yu Su, and Yun-Nung Chen. 2019. Knowledge-grounded response generation with deep attentional latent-variable model. In Thirty-Third AAAI Conference on Artificial Intelligence.

Thang Luong, Hieu Pham, and Christopher D. Manning. 2015.
Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421.

Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1468–1478.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318.

Prasanna Parthasarathi and Joelle Pineau. 2018. Extending neural generative conversational model using external knowledge sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 690–695.

Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789.

Siva Reddy, Danqi Chen, and Christopher D. Manning. 2018. CoQA: A conversational question answering challenge. arXiv preprint arXiv:1808.07042.

Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Thirtieth AAAI Conference on Artificial Intelligence.

Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1577–1586.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.

Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869.

Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass decoding. In Advances in Neural Information Processing Systems, pages 1784–1794.

Hao Xiong, Zhongjun He, Hua Wu, and Haifeng Wang. 2018. Modeling coherence for discourse neural machine translation. arXiv preprint arXiv:1811.05683.

Semih Yavuz, Abhinav Rastogi, Guan-Lin Chao, and Dilek Hakkani-Tür. 2018. DeepCopy: Grounded response generation with hierarchical pointer networks. Advances in Neural Information Processing Systems.

Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. 2018. QANet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541.

Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018. Improving the transformer translation model with document-level context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 533–542.

Kangyan Zhou, Shrimai Prabhumoye, and Alan W. Black. 2018. A dataset for document grounded conversations.
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 708–713.